The Aurora Concept Browser: Is this the Future of the Computing Experience?

August 14 2008 / by Mielle Sullivan / In association with Future Blogger.net
Category: Technology   Year: General   Rating: 6 Hot

“Welcome to the future, at least one possible future anyway,” announces Mozilla Labs. Along with designers from Adaptive Path, Mozilla has released Aurora, a proposal for the visual and design components of what could be the future not only of web browsing but of the computing experience in general. In three dramatized videos, users retrieve, manipulate and utilize data with remarkable ease. Devices and computers communicate fluidly with the web and with each other, quickly pulling up relevant data to help make plans. They even identify objects in the real world. At times it is hard to tell where the computer ends and the web begins. But is this really the future of computing? And how could all of it be made possible?


The Aurora concept browser differs from the web browsers of today in three obvious ways. First, it incorporates all applications, not just those connected to the web, and thus replaces the desktop. Second, it attempts to make the experience primarily visual rather than textual. Finally, it takes full advantage of what the Semantic Web will hopefully have to offer.

After a few minutes of watching the concept video, you realize that Aurora bears little resemblance to today’s web browsers. For one thing, there is no distinction between applications and websites, and there is no discrete moment at which the web is accessed. Rather, the whole environment is constantly interacting with the web. Strictly speaking, the Aurora concept browser is not a web browser at all. It is a graphical user interface that anticipates the web becoming THE application and resource of future computing. Any applications a computer may have that are not connected to the web will serve only to enhance and facilitate the web experience. In other words, in the future, your desktop, your operating system, all your programs, and your web browser will merge into one user interface built around and inside the web.
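To make the Semantic Web piece a bit more concrete, here is a minimal sketch, purely illustrative and not taken from the Aurora videos, of the underlying idea: sites publish machine-readable statements (triples) rather than just pages of text, and a browser in the Aurora mold could merge statements from several sources to answer a planning question directly. All of the data, names and the query function below are hypothetical.

```python
# Hypothetical illustration: treating the web as a pool of machine-readable
# (subject, predicate, object) statements instead of pages to be read.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

# Statements that different sites might publish in a Semantic Web world.
calendar_site: List[Triple] = [
    ("dinner-with-anna", "type", "Event"),
    ("dinner-with-anna", "date", "2008-08-21"),
]
restaurant_site: List[Triple] = [
    ("cafe-rosa", "type", "Restaurant"),
    ("cafe-rosa", "locatedIn", "Portland"),
    ("cafe-rosa", "openOn", "2008-08-21"),
]

def query(store: List[Triple], predicate: str, obj: str) -> List[str]:
    """Return every subject that carries the given predicate/object pair."""
    return [s for (s, p, o) in store if p == predicate and o == obj]

# The "browser" merges data from both sources and answers a planning question
# directly, instead of handing the user a list of links to read.
merged = calendar_site + restaurant_site
print(query(merged, "openOn", "2008-08-21"))  # ['cafe-rosa']
```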

Continue Reading

Will the Singularity Commoditize Intelligence?

October 02 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Economics   Year: General   Rating: 5

There’s an interesting comment thread unfolding over at Kevin Kelly’s Technium blog, attached to his Singularity critique. One of the more provocative statements pertains to the possibility of intelligence commoditization:

“The one thing the ‘Singularity’ will in fact be able to achieve will be the commoditizing of intelligence.” - John

Here’s my response:

The gradual commoditization of processes and basic intelligence has been underway for a while already. Certainly I can see the water level rising. But if the proper intelligence growth model is collective and individual intelligence amplification (IA) (Flynn’s research would certainly suggest the latter), then we’ll keep evolving right alongside AI. Perhaps this will be a grow-and-become-more-novel/specialized-or-be-commoditized model, but it certainly leaves some room, even in an abrupt singularity scenario, for the non-commoditization of some or most “human” intelligence (which I think is the wrong way to view intelligence anyway; it’s more a system property that manifests in agents).

That being said, super-smart tech will be very disruptive in the coming decade, and it remains to be seen how quickly we’ll amplify our own intelligence. Still, I do think acceleration in info, tech and comm will improve our ability to cope and let us devote more brains to higher-level functions.

So what do you think?

Will accelerating change commoditize intelligence?


Spivack & Kelly Pushing Tech / Consciousness Boundaries, But How Deep is the Rabbit Hole?

November 05 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Technology   Year: 2008   Rating: 4 Hot

“The web is going to wake up. It is already awake because we are awake and we are a part of it.” – Nova Spivack, Singularity Summit 2008

With their recent blogologue concerning the evolution of consciousness, Kevin Kelly of Wired fame and Nova Spivack, creator of Twine, are spearheading a shift away from the commonly held view of a future in which Strong AI grows in a box, to one in which the Cloud or the Planet is the box. Both are striving to broaden the context in which terms like technology, information, intelligence, communication and consciousness are defined. This is a very necessary step as most of the recent theory and development has been dominated by reductionist AI and technology thinkers who seem to view such phenomena in a vacuum.

Clearly, technology, information, intelligence, communication and consciousness (TIICC) do not exist in a vacuum. In his latest post, Kelly expands his definition of the emerging Technium to include the concept of meta-system transition (advanced by Turchin and Heylighen) that Spivack advocates. Thus, both are now in agreement that TIICC are dependent on the system, which is a very positive development, but it also brings them out onto a slippery memeslope.

Because there is no such thing as a closed system (as Gödel taught us), it is near-impossible, or perhaps fundamentally impossible, to create functional, highly useful definitions of TIICC. Kelly and Spivack both concur with this reality:

Continue Reading

Kevin Kelly's Singularity Critique is Sound and Rooted in Systems Understanding

October 01 2008 / by Alvis Brigis / In association with Future Blogger.net
Category: Environment   Year: General   Rating: 1

The Singularity Frankenstein has been rearing its amorphous head of late, evoking reactions from a variety of big thinkers. The latest to draw a line in the sands of accelerating change is Kevin Kelly, Wired co-founder and evolutionary technologist, who makes a compelling case against a sharply punctuated and obvious singularity. His argument is based on the following points:

1) A Strong-AI singularity is unlikely to emerge in a lab because Google will get there first.

“My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet,” writes Kelly.

I agree that powerful intelligence is far more likely to emerge as a property of the global brain and body, in co-evolution with accelerating information growth, than in a lab.

More fundamentally, I think our system is consistently advancing its intelligence, making human intelligence non-static. Therefore the notion of Strong AI is an illusion because our basis for comparison 1) is constantly changing, and 2) is erroneously based on a simple assessment of the computational power of a single brain outside of environmental context, a finding backed by cognitive historian James Flynn.

So yes, Google may well mimic the human brain and out-compete other top-down or neural net projects, but it won’t really matter, because intelligence will increasingly be viewed as a network-related property. (It’s a technical point, but an important distinction.)

2) The Singularity recedes as we develop new abilities.

Kelly writes, “The Singularity is an illusion that will be constantly retreating—always ‘near’ but never arriving.”

This statement is spot-on. As we amplify our collective intelligence (IA) at an accelerating rate and develop new capabilities, we get better at peering ahead. The implication is that we co-evolve with technology and information to do so, assimilating intelligence along the way. In such an IA scenario, there simply is no dichotomy between us and it. It’s a we.

While Kelly alludes to IA in his World Wide Computer statement, he could bolster his argument by stressing the connection between human, informational and technological evolution and development.

(For more on this, check out this Future Blogger post by Will.)

3) Imagining a sequence of scenarios doesn’t take into account system dynamics. Thinking machines must co-evolve with the environment in order for intelligence to be meaningful.

“Thinking is only part of science; maybe even a small part,” points out Kelly. “Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears.”

Continue Reading