Introduction
Hello friends. Welcome to my blog once more. I want to share a couple of thoughts I've been having about AI and how it's poised to potentially take away a lot of jobs and create redundancies across a large swath of the population. It was a long time ago that developers such as myself thought we would be the last in this race to the bottom.
This is one of those times when, looking back with hindsight, it's very difficult to understand why we ever thought that. Now it seems very clear that intellectual work would be the first to go. We always assumed that simple tasks like picking up a parcel and moving it to another spot would be easily automatable. In some ways this is true, but not really with pure artificial intelligence; that is more of a robotics problem to solve. All that being said, I have had some existential fear since the recent unveiling of OpenAI's o3 model and seeing how it is able to reason and absolutely smash the ARC-AGI benchmark.
Artificial general intelligence, of course, is the real game changer as it would be able to solve all sorts of problems—novel problems that humans have yet to solve. This is a fantastic milestone and somebody in my position definitely welcomes this new technology as it could save countless lives and change the future for the better. As with any new technology, however, it is also capable of making things worse. The full story, it would seem—as has been proven time and time again in the past—is that it will be a bit of both. Or a lot, in this case. I am hopeful we will be able to mitigate a lot of the risks and move into the future in a positive way.
My particular concern is losing the ability to work in this industry and all of the years of skills I've developed over time. I love working on these kinds of things and I would hate to be replaced by the very technology I sought to create. Oh, the irony.
Why would a business owner need someone like me in a post-AGI world anyway? All they need is a computer, after all, and they can just ask it to solve a problem, build a codebase, or build them a website, and it just goes. It may take hours to start with and cost thousands of dollars, but that will go down over time. Eventually it's going to cross some threshold where it's more feasible for an employer to pay for one of these subscription services than to hire someone like me to do the job.
So, I've recently had some thoughts that somewhat put me at ease. I'm still trying to rationalize my way through the issue, but I think, at least in the short term, there are some considerations that we can entertain. That being said, we are dealing with the future here and therefore it's very difficult to predict with any real accuracy. So take everything I say here with a grain of salt.
Trust
The way things are going to go is that some new technologies are going to pop up and others are going to pop off. It is very unlikely that anything I try to predict will be that accurate, but here goes. Companies and businesses are built on trust among their members.
The reason a key stakeholder will ask someone like me to do something for them is that they trust two things:
- That I am competent at what I do.
- That I have their best interests in mind.
So that's an interesting concept: the concept of trust. Now, what kinds of things can you trust? What is trust? Basically, it comes down to assuming that another entity will behave in a predictable manner that is in harmony with your own interests. I have enough experience with ChatGPT to know it cannot be trusted, as it frequently hallucinates and makes mistakes. So when I go to ChatGPT and ask it to solve a problem for me, I don't trust it. And the reason I don't trust it is that I know it's fallible, and that it's a machine.
The fallibility part will presumably become less of an issue as time goes on. It could conceivably get to the point where ChatGPT is never wrong. Even then, we still have to deal with the fact that a key stakeholder is not always going to be able to articulate what they want. They might think they need X, when really they need Y. That's the XY problem, which is a topic for another day.
In this scenario, the model goes and does what it's supposed to do, but it doesn't actually solve your problem. It solves some problem you don't actually have, and you're stuck in this infinite loop where you can't quite get what you want out of the AI because you don't really know what you want, but you think you do.
In those situations, you're going to need somebody you trust to bridge the gap between you and the AI: somebody you can go to and say, "Hey, I know that you have my best interests in mind and you understand how this underlying technology that's been written by these machines works." And they can go in there and find the exact spot where the reasoning's wrong.
Then the developer will say, "Oh yeah, this is actually what we mean to do. Let's change this part of it." They can even work with this AGI or whatever to make it work. But my point is, I think people in positions of making key architectural choices are fairly safe in this regard.
But how will entry-level developers who don't yet know anything fare? Unfortunately, that area will grow smaller and smaller, which already seems to be happening. It's going to get much harder to gain real-world experience. I suppose this has happened in other fields before, so I would be interested to hear about any parallels that can be drawn. But making architectural decisions and setting high-level direction in codebases is still going to require some trusted human entity who can bridge the gap between key stakeholders and the AI.
So this raises the key issue: can you get to the point where you trust an AGI? Say it gets things 99.99% right. For the 0.01% of the time it's wrong, you can't do anything about it, because you don't know anything about the domain of development. You need someone who does know something about development when that happens.
And the second scenario is: say the AGI gets things 100% right all the time. Like I said before, you are still a fallible human, asking the wrong things sometimes, not knowing what you want. Maybe you ask for a website to be built that can take users' credit card details, when really what you actually needed was a payment gateway that already exists. But you went out and built this thing that actually exposes dangerous information to the internet because you weren't specific enough with the AI about what you wanted.
You just aren't going to be able to provide adequate information to the AI to get it right 100% of the time, nor should you be expected to. Will there ever come a time when we can trust the AI to make all of this work without the help of human beings? This question really interests me, because I do believe that with silicon, copper, and whatever else you use to make circuits, you should be able to build a Turing-complete machine that gives rise to a conscious entity. Particularly because I think consciousness is actually the key factor when it comes to trust. You can trust something that you believe feels. You can trust something that you believe has conscious thoughts.
You can trust a cat, but you can't trust a robot vacuum cleaner. In some sense, you know the robot vacuum cleaner is going to do exactly what it's told all the time, so you can trust it in that way. But can you trust that it's going to come home when it goes missing?
I don't know if this analogy works here, but my point is, I think largely trust between humans in a business works because you trust that they are like you, that they have conscious experience as well, and that they are aligned with your values. And at the moment, with where we're going with AGI and AI, we are not developing anything that has emotions.
We are not developing anything that has the best interests of anyone but perhaps OpenAI in mind. And for that reason, high-level developers, I think, are going to be required in the near future.