6948 Dual processors running in parallel.

I read that some computer engineer calculated the number of processors running in parallel necessary to simulate the human brain. I don't remember the number, but it wasn't all that large.
Are we approaching the time when a computer could actually become sentient? A thousand trillion floating-point operations a second! We can't be far away.
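For scale, here's the standard back-of-envelope estimate people throw around (every number below is a rough, order-of-magnitude assumption of mine, not anything from the original article):

```python
# Back-of-envelope: raw compute to simulate the brain at the level of
# synaptic events. Every figure is a rough, commonly cited guess.
neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
firing_rate_hz = 100        # generous average firing rate

ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz
print(f"{ops_per_sec:.0e} synaptic ops/sec")              # ~1e+17
print(f"~{ops_per_sec / 1e15:.0f}x a 1-petaflop machine")  # ~100x
```

Published estimates range from roughly 10^14 to 10^18 operations per second depending on how much neural detail you assume actually matters, so a petaflop machine is anywhere from "already there" to a factor of a thousand short.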
Sentience is more than a bunch of floating-point operations. A computer can't be "sentient" any more than a toaster oven can: it's an appliance, essentially a big, fast abacus. The reason that computers in some cases SEEM to be sentient is that they are executing a HUMAN'S decisions stored inside them. We know far more about the far side of the moon and the bottom of the ocean than we know about how the human brain works, or how what we call the mind and soul is tied into it. There is some evidence that, on some level, human "sentience" already operates quantum-mechanically. A computer has never had a thought, told someone how it "feels," fallen in love, hated someone or something, made a decision, or had an emotion. We can come close to modeling a cockroach's behavior, but since a cockroach can live for two weeks with its head cut off, that's not really such a big deal...
Reminds me of Vernor Vinge's seminal work "True Names," in which a cabal of hackers meets in cyberspace. (Hiding your "true name" is key to staying free of government prosecution, an allegory to the old magical idea that knowing something's true name gives you power over it.)
A mysterious new hacker joins the group but only offers suggestions and clues in simple text messages every few days, even though near-future computing lets the others meet in real time in a 3-D multiplayer environment with fancy character avatars, like better versions of the multiplayer games we have now.
[Spoiler]
The new "hacker" turns out to be an abandoned experimental government AI that was believed not to work. It turns out it did work, just very slowly: even with the exponentially better computing you'd expect 50 years from now, it took a few days of crunching for the AI to simulate 5-10 minutes of human consciousness, which is why it was only dropping messages and clues to the human hackers, to get them to help it.
[end spoiler]
The problem with AI won't be computing; the economic incentives for ever more power are already in place to keep it increasing, probably well past whatever is needed for sentience to happen. It's more that we have no clue what kind of software is needed, or whether, in trying to model human intelligence and self-awareness, discrete concepts like hardware (neurons) and software (your soul?) even have any meaning. In that case, AI as we and fiction commonly conceive of it may not be possible in a purely computational environment. Perhaps it will be IA, or "Intelligence Augmentation," instead, where humans do what they do best (volition, intuition, and decisions) and machines do what they do best (storing, manipulating, and sorting data).
Or maybe we really aren't sentient in the way we think we are. Perhaps what we call sentience is just an emergent quality of all the little expert systems in our brain working in concert: the visual cortex, speech, memory, and so on. Maybe for each of us there really is no "you" (scary thought), just the unique pattern of these sub-units working together that creates the illusion of there being a "you."
In that case, AI might be easy: just bolt together enough systems or behaviors (or more appropriately, software programs), one for speech, one for vision, one for memory, another to coordinate them, and so on, then keep tweaking until it starts producing cogent, self-directed results; a toy sketch of that wiring follows below. This approach has already shown great promise in self-directed robotics, like the DARPA Challenge cars and the "cockroach"-level robots. It might work because the "artificiality" of an AI's "intelligence" may not really be any more artificial than your own.
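Purely to illustrate the "bolt subsystems together" idea, here is a minimal sketch. Everything in it (the module names, the urgency-based arbiter, the world-state dictionary) is hypothetical and invented for the example; it is not how any real robot is built:

```python
# Toy sketch of the "bolt subsystems together" architecture described
# above: independent expert modules plus a coordinator that arbitrates
# among their suggestions. Python 3.10+.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Proposal:
    source: str      # which module suggested this action
    action: str      # what it wants the agent to do
    urgency: float   # 0.0 - 1.0, how strongly it wants it

# Each "expert" is just a function from the shared world state to a
# proposal (or None if it has nothing to say right now).
Expert = Callable[[dict], Optional[Proposal]]

def vision(world: dict) -> Optional[Proposal]:
    if world.get("obstacle_ahead"):
        return Proposal("vision", "swerve", 0.9)
    return None

def memory(world: dict) -> Optional[Proposal]:
    if world.get("seen_here_before"):
        return Proposal("memory", "try_new_route", 0.4)
    return None

def speech(world: dict) -> Optional[Proposal]:
    if world.get("human_speaking"):
        return Proposal("speech", "stop_and_listen", 0.6)
    return None

def coordinate(experts: list[Expert], world: dict) -> str:
    """Coordinator: collect proposals, act on the most urgent one."""
    proposals = [p for e in experts if (p := e(world)) is not None]
    if not proposals:
        return "idle"
    return max(proposals, key=lambda p: p.urgency).action

if __name__ == "__main__":
    world = {"obstacle_ahead": True, "human_speaking": True}
    print(coordinate([vision, memory, speech], world))  # -> "swerve"
```

The coordinator here is just crude priority arbitration, loosely in the spirit of the behavior-based robotics behind those "cockroach"-level machines; the bet is that richer behavior emerges from adding and tuning modules rather than from any central "mind" program.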
IMO, the stickier problem is: what do we do with AIs once we create them? What are our moral obligations to them? Are they "alive"? Do they have rights? Is there a threshold below which an AI is more akin to an "animal" that can be modified or erased as needed, but above which it's considered a human equivalent? How do you measure that?