In a couple of recent podcasts, Naval Ravikant spoke about technological singularity in a way that I believe to be misguided. While Naval’s insights into other topics were brilliant, he appears to hold the same misconceptions about technological singularity that are so common today in the tech community. In this post I will try to clear up these misconceptions, as well as articulate my own view on the matter.
The misconceptions I am referring to are as follows:
- Technological singularity depends on A.I.
- Technological singularity speculations are a result of Moore’s law.
- Technological singularity is still 1000 years away.
- The people who speak about technological singularity are all geek dreamers who think it is utopia.
But before we delve any further into the discussion, a definition of technological singularity is in order. This is already a controversial topic, since even Wikipedia has the definition wrong.
Technological singularity is a stage in the advancement of technology and civilization at which the rate of information growth exceeds the capacity of any individual to comprehend the changes it brings about.
At this point most people jump to the conclusion that a general A.I. is required for humanity to reach such a stage. I would like to present an argument that no such requirement exists. And to make it more fun I’ll even throw in a bunch of pictures.
When thinking of technological singularity, this is the graph that immediately comes to mind for most people familiar with the concept: a visualization of the famous Moore’s Law, the exponential growth in the number of transistors on a chip (a trend that no longer holds, and is now often restated in terms of raw performance instead). But as far as singularity is concerned, this is a faulty metric, because the processing power of computers has nothing to do with understanding the technological and social changes occurring in the world. What really matters is the information stored on those computers.
This graph is much less known. It shows how quickly the sum of human knowledge has doubled over time. The growth is exponential: each doubling starts from the previous total (following the pattern 1, 2, 4, 8, …). According to Buckminster Fuller, human knowledge doubled roughly once every century until the year 1900. Driven by industrialization and ever faster communication, it was doubling every 25 years by the end of World War II, and sometime around the 1980s the doubling interval had shrunk to an astonishing 13 months. The rate of knowledge doubling today is unknown, but with widespread internet usage and increased communication between people in different parts of the world it has undoubtedly accelerated significantly. So what does any of this have to do with technological singularity?
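The compounding involved here is easy to underestimate. As a rough illustrative sketch (using the doubling intervals quoted above as stand-in figures, not measured data), here is how quickly those growth factors diverge:

```python
# Illustrative sketch of the "knowledge doubling" figures discussed above.
# The growth factor over a period is simply 2 ** (elapsed_years / doubling_interval).

def growth_factor(years: float, doubling_interval_years: float) -> float:
    """How many times a quantity multiplies over `years` at a given doubling rate."""
    return 2 ** (years / doubling_interval_years)

# Doubling every 100 years: one century multiplies knowledge by 2.
print(growth_factor(100, 100))  # 2.0

# Doubling every 25 years: the same century multiplies it by 16.
print(growth_factor(100, 25))   # 16.0

# Doubling every 13 months (~1.08 years): a single decade
# multiplies it roughly 600-fold.
print(round(growth_factor(10, 13 / 12)))  # 601
```

The point of the sketch is only that a shrinking doubling interval turns steady growth into growth no individual can track, which is exactly the argument that follows.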
This is Gottfried Wilhelm Leibniz. He lived in the 17th century, wore fabulous wigs, and knew everything there was to know in his time. That’s right, everything there was to know. Which isn’t much by our standards, since these days even middle school kids know plenty of things that Leibniz did not. But I digress. The point is that today there is not a single person in the world who possesses all of human knowledge. The rate at which knowledge doubles has exceeded the human capacity to accumulate it. This is not technological singularity yet, as accumulation of knowledge is not the same thing as comprehension of its impact on society, but it’s getting pretty close. Let me give you another example.
This is a stone axe. It is little more than a sharp rock tied to a stick. Back when these were in use, i.e. in the Stone Age, every stupid fucker out there knew how to make one. As technology progressed, fewer and fewer people had the skills needed to craft the ever more sophisticated tools, until only the most skillful artisans could produce the finest goods. Then the industrial revolution came along and things began to change.
This is a steam-powered locomotive, a product of the industrial revolution and one of its most famous symbols. The difference between it and the axe is not only the level of sophistication and technological advancement required to produce it, but also the fact that it was no longer made by a single person. The economic benefits of the division of labor led people to focus on creating only certain parts of the product, instead of doing the entire job from start to finish. There were still people who understood all the theoretical principles behind a steam locomotive and could, in principle, build one on their own, but it was no longer practical to do so. And then the information revolution happened.
This is a modern smartphone. And there is no single person in the world who knows all the intricacies involved in its production. The people who make the camera do not know how to make the processor and vice versa; both have only a vague understanding of how the screen is produced; and the kid who assembles it all doesn’t bother learning about any of that. The full details of its production are a mystery to every individual.
In the case of consumer electronics, the geeks can still stay on top of the information curve and learn about new gadgets more or less as soon as they appear. The same cannot be said for specialized equipment, which remains mostly unknown to anyone who does not work in the relevant field. The same goes for niche art and scientific discoveries: only a few can stay informed on their subjects of interest, and no one can keep up with everything happening in all the other fields. This is what technological singularity is really about, and we are already in its early stages. So what happens next?
As the rate of knowledge doubling increases, fields of specialization are going to become even narrower, and it is going to get harder and harder for anyone to make sense of the world. A general A.I. could undoubtedly make things even more difficult to comprehend, but it is by no means a prerequisite for technological singularity. In my opinion we are going to reach a state of full-blown technological singularity within the next 30 to 50 years. I do not know which exact technologies are going to be involved in its emergence, but if I were to bet on something, I would choose direct mind communication, telepathic networking, and the global brain: the really cool stuff that people like Elon Musk and Mary Lou Jepsen are working on.
But technological singularity is not something we should stand in awe of; it is a serious problem that we need to address. Contrary to what the doomsayers preach, it is not a direct threat to our survival, but rather a threat to our sanity. As less and less of the world makes sense to anyone, making any sort of informed decision is going to become increasingly difficult. And the number of decisions that need to be made is also going to increase. It is no coincidence that stress levels among city dwellers have been rising steadily over time. As we progress towards technological singularity, these problems are going to get worse, and individuals in their current state will no longer be able to function normally. The solution? A new type of mindset needs to emerge, one that is inherently more resistant to these problems, and one that can transcend fields of knowledge, allowing it to make sense of an ever-changing existence. But that is a subject for another day.