Computer technology has not yet come close to the printing press in its power to generate radical and substantive thought on a social, economic, political, or even philosophical level. The printing press was the dominant force that transformed the Middle Ages into a scientific society, not just by making books more available, but by changing the thought patterns of those who learned to read. The changes brought about by computer technology in the past 50 years pale by comparison. We are making computers in all forms available, but we're far from generating new thoughts or breaking up old thought patterns.
A few recent podcasts that caught my interest:
- When Your Side Project Blows Up with Dawson Whitfield of Logojoy. Logojoy is an online logo creator that uses machine learning to create the perfect logo for you. "Meh, this isn't AI," you might say. Doesn't matter — it has the same disruptive impact. Can you imagine when machine learning can create your website for you?
- a16z Podcast: AI, from ‘Toy’ Problems to Practical Application. The VC-backed tech startup world is focusing on operationalizing technology that already exists.
- Managing Procrastination, Predicting the Future, and Finding Happiness – Tim Urban. Normally, Tim Ferriss bugs the heck out of me. Tim Urban on the other hand (founder of Wait But Why) is like an intellectual Elon Musk. Part of their conversation discussed the long view Urban takes on the topics he writes about — "all the way back to the Big Bang." Given the long view, and how much society changed in the last 50 years, it's easy to see how bad humans are at predicting change.
"But Daniel, none of these links are conclusive proof that AI is coming."
No, the AI apocalypse hasn't happened yet. But, the pace of change is stunning and we can't reliably forecast more than six months out (if that).
The project is being guided by the artificial-intelligence researcher Sebastian Thrun, who as a Stanford professor in 2005 led a team of students and engineers that designed the first winning entry in an autonomous vehicle contest organized by the Pentagon’s Defense Advanced Research Projects Agency.
Since then, Dr. Thrun has focused more of his activities at Google, giving up tenure at Stanford and hiring a growing array of experts to help with the development project.
In frequent public statements, he has said robotic vehicles would increase energy efficiency while reducing road injuries and deaths. And he has called for sophisticated systems for car sharing that, he says, could cut the number of cars in the United States in half.
“What if I could take out my phone and say, ‘Zipcar, come here,’ ” he asked an industry conference last year, “and a moment later the Zipcar came around the corner?”
Google Lobbies Nevada To Allow Self-Driving Cars. The robots are coming.
Inferring intent on mobile devices. A brainstormed list of the many different ways context could be discovered on a mobile device.
This came to me, as many ideas do, at 5 o’clock in the morning:
Google Earth now has Street View. To many, it comes as no surprise; Google has many web properties which will work quite well in unison once they are integrated. With the addition of Street View, though, the company is well on its way to creating a static, visual representation of the physical Earth. Furthermore, as technology progresses, a human's ability to interact with this digital “environment” will be greatly enhanced. At an intersection in the near future, the boundary between the “analog” and “digital” worlds will be seamless. A person will be able to transition from one to the other, with no conscious observation of the technology in between.
To quote my friend Shane out of context, the seeds for this future are already sown. Barring a complete collapse of civilization, it will happen, and it will happen soon. Coincidentally enough, communication is taking the same route.
One big question, however, is this: how do you make the “digital” representation of the “analog” environment dynamic and live? Is it a matter of everyone having embedded video sensors and GPS receivers which broadcast in realtime to a connected web? Will the future be in nano clusters, swarms of technology which capture the environment for us? Or will there even be a need or desire to move around in an “analog” world if we can manipulate far more in its “digital” counterpart?
Now take one step back. Who else in the blogosphere has already “created” these worlds, this idea? When will I be able to be aware, to be conscious of that knowledge without having to search?
AI is soon.