How is a content-aware fill feature, no matter how cool it seems, remotely related to strong AI?
I see no evidence from the video that this feature involves any kind of learning. It's presented as simply an algorithm, awesome though it may be, that uses statistics and sampling to regenerate pixel data in an area based on that area's surroundings. But the entire algorithm was designed by humans; the software didn't come up with a way to do that on its own.
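To make the distinction concrete: filling a region from its surroundings doesn't require any learning at all. Here is a minimal sketch of diffusion-style inpainting, where unknown pixels are repeatedly replaced by the average of their neighbors so information "flows" in from the border. This is an illustrative toy, not Photoshop's actual method (which uses far more sophisticated patch-based synthesis), but it shows how a fixed, human-designed rule can regenerate plausible pixel data with zero learning.

```python
# Toy diffusion inpainting: repeatedly set each "hole" pixel to the
# average of its 4-connected neighbors. A fixed rule, no learning.
# This is an illustrative sketch, not Adobe's actual algorithm.

def inpaint(image, mask, iterations=100):
    """image: 2D list of floats; mask: 2D list, True = pixel to fill."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    total, count = 0.0, 0
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx2 = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx2 < w:
                            total += out[ny][nx2]
                            count += 1
                    if count:
                        nxt[y][x] = total / count
        out = nxt
    return out

# A 3x3 gray image with one unknown pixel in the center:
img = [[0.5, 0.5, 0.5],
       [0.5, 0.0, 0.5],
       [0.5, 0.5, 0.5]]
msk = [[False] * 3, [False, True, False], [False] * 3]
filled = inpaint(img, msk)
# The hole converges toward the surrounding value, 0.5
```

Every step above is deterministic and hand-designed, which is the poster's point: clever output, but nothing resembling strong AI.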
Not everything that looks cool is an application of machine learning. Wake me up when Photoshop CS 10 evolves its own algorithms to do things like this after watching the user do it a dozen times on different areas in different images.
To me, AI ≠ learning... at least not entirely.
I'm talking about the neural processing that goes on in vision systems and the like. Much of what we "see" is pre-processed by the eye itself, and the brain uses many tricks like content awareness and area-averaging as "shorthand" to simulate full vision. Any number of optical illusions handily demonstrate what we do and don't "really see".
I'm just commenting on the potential for emergent behavior from coordinated groupings of "clever" reflexes like this, where the core learning/decision module is suddenly relieved of much of what would otherwise be an overwhelming task: taking on the real world with brute-force computing. When I see such "clever" behavior in a run-of-the-mill consumer product, a desktop image-editing suite, you have to wonder what the people who are really working on AI, expert systems, etc. are coming up with, or what they could leverage by using similar code and techniques.
This approach has been extremely fruitful in robotics, and nearly all of the recent successes have come from it. Yet people tend to become dismissive or even overtly hostile to the approach very quickly, because, I suspect, it's unsettling to think that what you think of as "you" may be the sum total of such smaller non-intelligent systems rather than some monolithic "soul". Or conversely, that something which may one day arguably possess human-level sentience is just a collection of these behaviors.
And in parallel, we're slowly learning that such reflexive shorthand systems and sub-processing do go on in humans. Some of it is disconcertingly similar to the non-AI, non-brute-force "tricks" that the robotics and software vision/graphics fields have come up with. For instance, there is a man who is legally blind because his vision centers were destroyed by a stroke. Yet if a person stands in front of him and makes a face, he can still tell with 100% accuracy what expression they're making.
He can't read, or see color, light or dark, edges or shapes. He can't "see" anything, including people; his world is black. However, the part of his brain that recognizes facial expressions is still connected to his eyes and optic nerve, and he can now navigate a maze if smiley and frowny faces are put on the walls...
If AI, or a reasonable facsimile of it, can one day be created as an emergent property of such sub-systems, AND if it becomes medically proven that damaging or removing enough of these systems from a person makes them "cease to be" at some point... it's as potentially unsettling as the evolution/creationism debate was, and still is. And it has many curious and frightening ramifications for what it means to be human, and for what intelligence/sapience/sentience really is.
AJ's personal SHTF scenario is Skynet, so you'll have to excuse his enthusiasm.
It probably bothers people more if they remember that I also strongly suspect the technological singularity/strong AI may at the same time be our one best shot at salvation and survival as a species.