justinlillico.com

Jan 06, 2025

AI and The Future of Software Developers

Introduction

Hello friends. Welcome to my blog once more. I want to share a couple of thoughts I've been having about AI, how it's poised to take away a lot of jobs, and how it could create redundancies across a large swathe of the population. There was a time when developers such as myself thought we would be last in this race to the bottom.

Looking back with hindsight, it's difficult to understand why we ever thought that. It now seems clear that intellectual work would be among the first to go. We always assumed the easy things to automate would be simple physical tasks, like picking up a parcel and moving it to another spot. In some ways this is true, but that isn't really a problem for pure artificial intelligence; it's more of a robotics problem. All that being said, I have felt some existential fear since the recent unveiling of OpenAI's o3 model, with its ability to reason and absolutely smash the ARC-AGI benchmark.

Artificial general intelligence, of course, is the real game changer as it would be able to solve all sorts of problems—novel problems that humans have yet to solve. This is a fantastic milestone and somebody in my position definitely welcomes this new technology as it could save countless lives and change the future for the better. As with any new technology, however, it is also capable of making things worse. The full story, it would seem—as has been proven time and time again in the past—is that it will be a bit of both. Or a lot, in this case. I am hopeful we will be able to mitigate a lot of the risks and move into the future in a positive way.

My particular concern is losing the ability to work in this industry and all of the years of skills I've developed over time. I love working on these kinds of things and I would hate to be replaced by the very technology I sought to create. Oh, the irony.

Why would a business owner need someone like me in a post-AGI world anyway? All they need is a computer, after all, and they can just ask it to solve a problem or to build a codebase or to build them a website, and it just goes. It may take hours to start with and cost thousands of dollars, but this will go down over time. It's eventually going to get to the point where it crosses some threshold where it is more feasible for an employer to have one of these subscription services than to hire someone like me to do the job.

So, I've recently had some thoughts that somewhat put me at ease. I'm still trying to rationalize my way through the issue, but I think, at least in the short term, there are some considerations that we can entertain. That being said, we are dealing with the future here and therefore it's very difficult to predict with any real accuracy. So take everything I say here with a grain of salt.

Trust

New technologies are going to pop up, and some of them are going to take off. It is very unlikely that anything I try to predict will be all that accurate, but here goes. Companies and businesses are built on trust among their members.

The reason a key stakeholder will ask someone like me to do something for them is that they trust two things:

  1. That I am competent at what I do.
  2. That I have their best interests in mind.

So trust is an interesting concept. What is trust? What kinds of things can you trust? Basically, it comes down to assuming that another entity will behave in a predictable manner that is in harmony with your own interests. By that definition, I have enough experience with ChatGPT to know it cannot be trusted: it frequently hallucinates and makes mistakes. So when I go to ChatGPT and ask it to solve a problem for me, I don't trust it, both because I know it's fallible and because I know it's a machine.

The fallibility will presumably become less of an issue as time goes on; it could conceivably get to the point where ChatGPT is never wrong. Even then, we have to deal with the fact that a key stakeholder is not always going to be able to articulate what they want. They might think they need X, when really they need Y. This is the XY problem, a topic for another day.

In this scenario, the model goes and does what it's supposed to do, but it doesn't actually solve your problem. It solves some problem you don't actually have, and you're stuck in this infinite loop where you can't quite get what you want out of the AI because you don't really know what you want, but you think you do.

In those situations, you're going to need somebody you trust to bridge the gap between you and the AI—somebody you can go to and say, "Hey, I know you have my best interests in mind, and you understand how this underlying technology the machines have written actually works." They can go in there and find the exact spot where the reasoning went wrong.

Then the developer will say, "Oh yeah, this is actually what we meant to do. Let's change this part of it." They can even work with this AGI, or whatever it is, to make that happen. But my point is, I think people in positions of making key architectural choices are fairly safe in this regard.

But how will entry-level developers, who don't yet know anything, fare? Unfortunately, that space will grow smaller and smaller, which already seems to be happening, and it's going to get much harder to gain real-world experience. I suppose this has happened in other fields before, so I would be interested to hear about any parallels that can be drawn. But architectural decisions and high-level changes of direction in codebases are still going to require some trusted human who can bridge the gap between key stakeholders and the AI.

So this raises the key issue: can you get to the point where you trust an AGI? Say it gets things 99.99% right. For the 0.01% of the time it's wrong, you can't do anything yourself, because you don't know anything about the domain of development. You still need someone who does.

And the second scenario is: say the AGI gets things 100% right all the time. Like I said before, you are still a fallible human, sometimes asking the wrong things, not knowing what you want. Maybe you ask for a website to be built that can take users' credit card details, when what you actually needed was a payment gateway that already exists. Instead, you've built something that exposes sensitive information to the internet, because you weren't specific enough with the AI about what you wanted.
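To make that concrete: the Y you actually wanted in that scenario is a hosted payment page, where an existing gateway holds the card details so they never touch your servers. Here is a minimal sketch of that safer design, assuming Stripe's Python SDK purely as an illustration; the product, price, and URLs are placeholders:

```python
# Sketch: delegate card handling to a hosted payment gateway (Stripe, as an
# illustration) instead of asking the AI to build a form that stores card numbers.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

def create_checkout_session() -> str:
    """Create a hosted checkout page; card details go to the gateway, not to us."""
    session = stripe.checkout.Session.create(
        mode="payment",
        line_items=[{
            "price_data": {
                "currency": "usd",
                "product_data": {"name": "Example product"},  # placeholder
                "unit_amount": 5000,  # $50.00, in cents
            },
            "quantity": 1,
        }],
        success_url="https://example.com/thanks",
        cancel_url="https://example.com/cancelled",
    )
    return session.url  # redirect the customer here

print(create_checkout_session())
```

The safe design was never "build a form that takes card numbers"; it was "hand the card numbers to someone whose whole business is holding them." That reframing is exactly what the fallible human in this scenario failed to ask for.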

You just aren't going to be able to—nor should you be expected to—provide adequate information to the AI to get it right 100% of the time. Will there ever come a time when we can trust the AI to make all of this work without the help of human beings? This question really interests me, because I do believe that with silicon, copper, and whatever else goes into circuits, a Turing-complete machine should be able to give rise to a conscious entity. And I think consciousness is actually the key factor when it comes to trust. You can trust something that you believe feels. You can trust something that you believe has conscious thoughts.

You can trust a cat, but you can't trust a robot vacuum cleaner. In some sense, you know the robot vacuum cleaner is going to do exactly what it's told all the time, so you can trust it in that way. But can you trust that it's going to come home when it goes missing?

I don't know if the analogy quite works, but my point is that trust between humans in a business largely works because you believe the other person is like you: that they have conscious experience as well, and that they are aligned with your values. And at the moment, with where we're going with AI and AGI, we are not developing anything that has emotions.

We are not developing anything that has the best interests of anyone but perhaps OpenAI in mind. And for that reason, high-level developers, I think, are going to be required in the near future.

Apr 19, 2024

The Role of Artificial General Intelligence in Preventing Apocalypse Scenarios

In today's rapidly advancing world, the threat of an apocalypse looms large, with potential disasters stemming from multiple fronts: genetic engineering, pandemics, nuclear conflict, and artificial intelligence (AI) gone rogue. Each of these vectors presents a formidable challenge, demanding sophisticated solutions that could arguably be beyond human capacity alone. This is where the pursuit of Artificial General Intelligence (AGI) comes into play, promising not just advancements but perhaps survival itself.

Unpacking the Threats

CRISPR and Genetic Engineering: CRISPR technology has handed humanity the genetic scissors to edit life's blueprint. However, this powerful tool comes with the potential for unintended consequences, including the creation of new pathogens or irreversible changes to the human genome. The complexity of biological ecosystems and the high stakes of gene editing call for oversight that could one day be enhanced by AGI's computational power and predictive modeling.

Virus Manufacture and Biological Threats: The manufacture of viruses, whether for research or as biological weapons, presents a clear existential threat. Current biosecurity measures may not be foolproof in a world where technology is accessible and expertise widespread. AGI could help by designing more effective containment strategies, predicting outbreak patterns, and speeding up vaccine development through rapid simulation and testing.

Nuclear War: The perennial specter of nuclear war continues to cast a long shadow over global security. AGI could potentially manage disarmament processes, monitor compliance with international treaties, and even control nuclear arsenals with a level of impartiality and precision unattainable by humans.

AI Armageddon: Ironically, the very pursuit of AI could itself precipitate an apocalypse if control over superintelligent systems is lost. Developing AGI might seem like fighting fire with fire, but with proper safeguards, it could actually enforce stringent controls over lesser AI forms and prevent them from evolving unchecked.

Expanding Control and Developing Defenses: The Dual Pathways to Mitigation

Control Through International Cooperation: History shows us that control agreements can be effective. Just as the world has seen with chemical weapons and, to a lesser extent, nuclear weapons, international treaties can mitigate risks. The principle of mutually assured destruction has helped prevent nuclear wars so far, but it's a precarious balance. The constant threat of accidents or the actions of rogue leaders looms large, making this control only a partial solution. AGI could play a critical role by enhancing treaty verification processes, ensuring compliance, and managing de-escalation protocols during crises.

Advancing Defensive Technologies: The second approach to mitigating these apocalyptic threats is through technological advancements that counteract the risks. Just as the rapid development of counter-viruses could neutralize biothreats, we need a similar pace in creating defenses against nuclear weapons. In the sixty-plus years of the missile age, the world has never had a reliable means of stopping a nuclear attack. AGI could change this by accelerating the development of defensive strategies that are beyond current human capabilities.

AGI's Role in Rational Decision-Making and Crisis Management

Imagine a scenario where a nuclear crisis is imminent. Here, AGI could provide highly rational, unbiased advice for decision-makers, potentially guiding humanity away from catastrophic outcomes. Furthermore, AGI could be tasked with developing systems capable of neutralizing threats in real time, such as intercepting ballistic missiles or even safely redirecting them into space. This level of intervention would require an AGI with capabilities far surpassing anything currently available: an entity that combines deep knowledge of technology, human psychology, and strategic defense.

Conclusion

As we stand on the brink of potential global catastrophes, the imperative to develop artificial general intelligence has never been clearer or more urgent. AGI holds the promise of solving problems that are currently beyond human reach, acting as a guardian of humanity's future. By harnessing this potential responsibly, we could secure a safer, more resilient world for future generations.

Jan 11, 2024

Redefining Human-Computer Interaction: The Revolutionary Role of GPTs

In the vast expanse of technological evolution, the way humans interact with computers has been a constant study in innovation. From the clunky keyboards of the early computing era to the sleek touchscreens of today, each step has been a leap towards greater efficiency and intuitiveness. Today, we stand at the cusp of another monumental shift, heralded by the advent of Generative Pre-trained Transformers (GPTs). These are not mere tools; they are harbingers of a future where our interactions with computers become more natural and human-like than ever before.

For roughly six decades, since the invention of the computer mouse in the 1960s, our primary ways of interacting with computers have been the keyboard, the mouse, and the graphical user interface. This longstanding reliance highlights how little the fundamentals of human-computer interaction have changed. GPTs, however, promise a seismic shift away from these traditional interfaces. They offer a more intuitive, conversational, and context-aware interaction, akin to speaking with a knowledgeable assistant rather than inputting commands through clicks and keystrokes.

Unlike traditional search engines that rely on keyword-based queries, GPTs understand and process natural language, providing contextually relevant and conversational responses. This nuanced understanding of human language and intent marks a significant departure from the impersonal, list-based outputs of search engines. For developers, this opens a new realm of possibilities for creating user interfaces that are more aligned with natural human communication.
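To see how little scaffolding such an interface needs, here is a minimal sketch of a conversational query, assuming OpenAI's Python SDK; the model name and prompts are illustrative:

```python
# Sketch: a conversational query in place of a keyword search.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful travel assistant."},
        {"role": "user", "content": "I want a warm beach holiday in March, "
                                    "two adults, under $3000. Where should I look?"},
    ],
)
print(response.choices[0].message.content)  # a conversational answer, not a list of links
```

Where a search engine would match keywords and return links, the model answers the question that was actually asked, and can be asked to refine its answer within the same conversation.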

In this new era, developers are no longer just coding for functionality within the constraints of a graphical interface. Instead, they are designing experiences that are more akin to human-to-human interaction. This shift requires a new set of skills focused on natural language understanding and AI-driven design, pushing the boundaries of what's possible in software development.

Imagine booking a holiday or running a business automation through a simple conversation with your computer. GPTs make this possible. They can interpret your requirements, ask relevant follow-up questions, and execute tasks with a level of ease and understanding that traditional interfaces cannot match. This capability is set to revolutionize how we perform a myriad of daily tasks, making technology more accessible and efficient.
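One way this already works in practice is tool calling, where the model maps a conversational request onto a function your code exposes. A minimal sketch, again assuming OpenAI's Python SDK; the book_flight function and its schema are hypothetical:

```python
# Sketch: a GPT turning a conversational request into a structured function call.
# The book_flight tool and its parameters are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "book_flight",
        "description": "Book a flight for the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Book me a flight from Brisbane to Tokyo on 2024-03-01."}],
    tools=tools,
)

# Instead of prose, the model replies with a structured call your code can execute.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```

The model fills in the structured arguments from free-form conversation, while your code remains the thing that actually performs the booking; the GPT is the interface, not the back end.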

Looking ahead, GPTs are poised to become the heart of computer systems, translating user intent into actionable commands. This goes beyond text-based interaction; imagine a future where a camera interprets your hand gestures, or a microphone picks up your spoken words, and a GPT translates these into digital commands. This leap forward in human-computer interaction is not just about convenience; it's about augmenting human capabilities and freeing up our time and mental resources for more creative and meaningful pursuits.

The arrival of GPTs marks a new chapter in the story of human-computer interaction. As we move away from the confines of graphical user interfaces and towards a more natural, conversational mode of interaction, we unlock a world of possibilities. It's a journey from interacting with a machine to conversing with an intelligence that understands us. The potential of GPTs to transform our digital lives is immense, and the time to embrace this change is now.
