The never-ending discussion around AGI (Artificial General Intelligence) overshadows a far more immediate problem, one with a much shorter timeframe. We spend so much time imagining how a potential “intelligence explosion” might destroy existing social structures that we fail to recognize that this process is already well underway.
AI is already the world’s most powerful brainstorming tool, a cognitive amplifier. In the coming years (I won’t put a date on it; predicting the future tends to be an unreliable endeavor), we will see an acceleration of the skill and competence gap already produced by the last technological revolution.
While this shouldn’t be surprising, I don’t think we fully comprehend the multiplier effect that economic factors will have on the issue, nor how self-replicating the situation is.
In a sort of vicious circle, our current intelligence explosion risks creating insurmountable levels of inequality, driven by the growing, hard-to-measure gap between an ascendant (and ever-smaller) intellectual elite and those in manual or unskilled trades. Pre-existing factors should, of course, be taken into account, but any narrative that frames the situation as one of oppressors and victims is unproductive and brings us no closer to solving the problem.
If Bauman is right (in Liquid Modernity), the timing could hardly be worse: weakened post-modern social structures are poorly equipped to face this challenge, and the last generation capable of making the changes needed for a more equitable future is historically disengaged from politics and public life.
Solutions are needed, and I personally advocate for education. Democratization isn’t the issue here—we cannot democratize intelligence any more than we can make everyone a gold medalist—but we can ensure that our educational programs provide opportunities unhindered by economic factors, allowing for social mobility.
Better tool design, while welcome, is no guarantee either; nor do I believe the development of these tools should be tied to absurd and shortsighted regulations, especially when AI is well suited to tackling some of the world’s most pressing problems.
Not so long ago, a study suggested that our use of Google was reducing our cognitive capacities by creating dependence on mobile devices. The answer was not to take down the search engine but to instill soft skills, such as discipline, in our population, allowing us to better channel a powerful tool toward building a better world.
In essence, I believe we are facing the dilemma of the ouroboros, and the question is: How do we avoid eating our own tail?

Share what you think!