A Shift in Focus
Okay, so I'm deviating a little and making a bit of a life change. For the last several years, programming has been a huge part of my life. It's a very reason-heavy pursuit, one that requires abstraction and pulls you away from the real world.
There are many ways to look at the world, and one way is to think of people as operating in two modes: as individuals or as parts of a larger machine. Of course, we are both, but some people tend to lean more one way than the other.
My Journey in Tech
I’ve taken a deep dive into the world of tech, AI, gizmos, and gadgets—connecting this to that, making things work in new ways. And I absolutely love it. I couldn’t be happier with the direction I’ve taken in my career. I love tinkering with new things, playing with possibilities, and seeing what these innovations can bring to the world.
But I’ve decided to make a bit of a change. I’ll still continue my career in tech, but for now, I won’t be doing any extracurricular programming. Instead, I’m reconnecting with a part of myself that I left behind a while ago—my musical side.
Returning to Music
I used to write songs, play music, and perform, and I loved it. When I played, I felt connected to humanity in a way that's hard to explain, like I was a conduit for creativity. Not that programming isn't creative, but it's a different kind of creativity. With music, I can share it with anyone. Sure, programmers can share code, but the people who can appreciate it are a rare breed. Music, though: it speaks to everyone.
The Plan Moving Forward
So yeah, that’s a little update on where I’m at. My plan is to open up my YouTube channel, get some decent recording gear, and maybe, once a fortnight, put something out there. At the same time, I’ll work on writing music that really speaks to me.
For the first time in a long time, I feel genuinely excited about this. Not that there’s anything wrong with the life I live—I love it. But variety is the spice of life, as they say. So here’s to what’s next.
So, I just watched Donald Trump's inauguration as the 47th president. It was quite scary, though in a different way compared to the first time. Now, for full context, this isn't about right and left politics. As a matter of fact, before I say anything else, I need to mention that I think those divides need to be torn down, because people are not in one camp or the other. They're usually a blend of these things, perhaps with a particular lean, but mostly a blend.
Those divides might be useful for statistical purposes, to gauge whether a country is more progressive or more conservative, sure. But on an individual level, they're just terrible. Listening to this inauguration and the toxic rhetoric coming from this man was terrifying. He's still talking about how the 2020 election was stolen from him. Four years later, he still can't accept that he lost. Somehow, that particular election was rigged, but not the two that he won, which seems pretty unlikely.
And then there was his behavior—acting like a vindictive schoolyard bully, saying things like, "Can you see Biden doing this?" as he signed a slew of executive orders to pardon people who had physically assaulted others during the January 6th riots—riots that he incited four years ago. As crazy as it is that he has been elected president again, I don't think all of his policies are terrible. Some of them, particularly his foreign policy, are very disruptive and could probably lead to meaningful changes around the world.
As I said, this isn't about politics, at least not politics in the way most people talk about them. I don't want to talk in terms of left and right. I don't care about that. I'm fine with a "conservative" leader in office. But someone as divisive as Trump, someone who hardens these boundaries around large swaths of the American public, I just can't see that being a good thing. He's very dangerous. So, in four years' time, when he has to concede power, or perhaps gets in for a third term, I'm genuinely scared to see the direction that republic is heading. It seems to be sliding into an oligarchy, where the elites rule over everybody else and revel in it.
There seem to be a lot of politically uneducated people in that country. How can they just be okay with the blatant, pathetic, non-reflective rhetoric that Trump spouts? Granted, place him next to Biden and the comparison doesn't flatter either of them; Biden wasn't a great president by any measure. But place Trump next to someone like Obama, and the difference becomes glaringly obvious. Obama's way of speaking, his ability to address people, and his competence in maintaining order, even if he was just another politician telling some lies and some truths, made him far more qualified to lead. He represented a semblance of protection for the Constitution and the principles America was built on.
Donald Trump does not do that. He protects himself. He comes before the Constitution. He comes before the American people.
I guess it's only a matter of time before we see the emergence of a new world superpower. Or perhaps the superpower of Trumpists and Trumpism. Maybe I'll get deported to the U.S. to be stoned to death for writing this post. Until then, good luck.
Introduction
Hello friends. Welcome to my blog once more. I want to share a couple of thoughts I've been having about AI and how it's poised to take away a lot of jobs and create redundancies across a large range of the population. There was a time when developers such as myself thought we would be the last to go in this race to the bottom.
Looking back with hindsight, it's hard to understand why we ever thought that. Now it seems very clear that intellectual work would be among the first to go. We always assumed that simple tasks like picking up a parcel and moving it to another spot would be easily automatable. In some ways that's true, but it's not really a pure artificial intelligence problem; it's more of a robotics problem. All that being said, I have had some existential fear since the recent unveiling of OpenAI's o3 model, seeing how it is able to reason and absolutely smash the ARC-AGI benchmark.
Artificial general intelligence, of course, is the real game changer as it would be able to solve all sorts of problems—novel problems that humans have yet to solve. This is a fantastic milestone and somebody in my position definitely welcomes this new technology as it could save countless lives and change the future for the better. As with any new technology, however, it is also capable of making things worse. The full story, it would seem—as has been proven time and time again in the past—is that it will be a bit of both. Or a lot, in this case. I am hopeful we will be able to mitigate a lot of the risks and move into the future in a positive way.
My particular concern is losing the ability to work in this industry and all of the years of skills I've developed over time. I love working on these kinds of things and I would hate to be replaced by the very technology I sought to create. Oh, the irony.
Why would a business owner need someone like me in a post-AGI world anyway? All they need is a computer, after all, and they can just ask it to solve a problem or to build a codebase or to build them a website, and it just goes. It may take hours to start with and cost thousands of dollars, but this will go down over time. It's eventually going to get to the point where it crosses some threshold where it is more feasible for an employer to have one of these subscription services than to hire someone like me to do the job.
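To make that threshold concrete, here's a toy break-even sketch. Every figure in it is invented purely for illustration; real salaries, subscription prices, and workloads will differ, so treat it as a shape of an argument rather than a forecast.

```python
# Toy break-even sketch: at what point does an AI subscription undercut a hire?
# All figures below are made up for illustration, not real prices.

DEV_ANNUAL_COST = 150_000         # hypothetical fully-loaded cost of a developer
AI_MONTHLY_SUBSCRIPTION = 2_000   # hypothetical "build my codebase" service fee
AI_COST_PER_TASK = 50             # hypothetical marginal cost per request
TASKS_PER_YEAR = 600              # hypothetical yearly workload

ai_annual_cost = AI_MONTHLY_SUBSCRIPTION * 12 + AI_COST_PER_TASK * TASKS_PER_YEAR

print(f"Developer:  ${DEV_ANNUAL_COST:,}/year")
print(f"AI service: ${ai_annual_cost:,}/year")
print("AI service is cheaper" if ai_annual_cost < DEV_ANNUAL_COST
      else "Developer is cheaper")
```

The exact numbers don't matter; the point is that the per-task cost only has to fall so far before the comparison flips.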
So, I've recently had some thoughts that somewhat put me at ease. I'm still trying to rationalize my way through the issue, but I think, at least in the short term, there are some considerations that we can entertain. That being said, we are dealing with the future here and therefore it's very difficult to predict with any real accuracy. So take everything I say here with a grain of salt.
Trust
The way things are going to go is that some new technologies are going to pop up and others are going to pop off. It is very unlikely that anything I try to predict will be that accurate, but here goes. Companies and businesses are built on trust among their members.
A key stakeholder asks someone like me to do something for them because they trust two things:
- That I am competent at what I do.
- That I have their best interests in mind.
So that's an interesting concept: trust. Now, what kinds of things can you trust? What is trust? Basically, it comes down to assuming that another entity will behave in a predictable manner that is in harmony with your own interests. I have enough experience with ChatGPT to know it cannot be trusted, as it frequently hallucinates and makes mistakes. So when I go to ChatGPT and ask it to solve a problem for me, I don't trust it. And the reason I don't trust it is that I know it's fallible, and that it's a machine.
The fallibility will presumably become less of an issue as time goes on. It could conceivably get to the point where ChatGPT is never wrong. Even then, we still have to deal with the fact that a key stakeholder is not always going to be able to articulate what they want. They might think they need X, when really they need Y. That's the XY problem, a topic for another day.
In this scenario, the model goes and does what it's supposed to do, but it doesn't actually solve your problem. It solves some problem you don't actually have, and you're stuck in this infinite loop where you can't quite get what you want out of the AI because you don't really know what you want, but you think you do.
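To make the XY problem concrete, here's a minimal, hypothetical example: the stakeholder asks for X ("give me the last three characters of the filename") when what they actually need is Y (the file extension).

```python
import os

filename = "photo.jpeg"

# X: what was literally asked for, "the last three characters of the filename".
# A perfectly obedient AI delivers exactly that.
asked_for = filename[-3:]                        # "peg": not an extension at all

# Y: what was actually needed, the file extension.
actually_needed = os.path.splitext(filename)[1]  # ".jpeg"

print(asked_for, actually_needed)
```

The request was fulfilled to the letter, and the real problem is still unsolved.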
In those situations, you're going to need somebody that you trust to bridge the gap between you and the AI—somebody that you go, "Hey, I know that you have my best interests in mind and you understand how this underlying technology that's been written by these machines works." And they can go in there and find the exact spot where the reasoning's wrong.
Then the developer will say, "Oh yeah, this is actually what we mean to do. Let's change this part of it." They can even work with this AGI or whatever to make it work. But my point is, I think people in positions of making key architectural choices are fairly safe in this regard.
But how will entry-level developers who don't yet know anything fare? Unfortunately, that space will grow smaller and smaller, which already seems to be happening. It's going to get much harder to gain real-world experience. I suppose this has happened in other fields before, so I would be interested to hear about any parallels that can be drawn. But making architectural decisions and steering the high-level direction of codebases is still going to require some trusted human who can bridge the gap between key stakeholders and the AI.
So this really raises the key issue here: can you get to the point where you trust an AGI? Say it gets things 99.99% right. For that 0.01% of the time that it's wrong, you can't do anything, because you don't know anything about the domain of development. You need someone who actually knows that domain.
And the second scenario is: say the AGI gets things 100% right all the time. Like I said before, you are still a fallible human, asking the wrong things sometimes, not knowing what you want. Maybe you ask for a website to be built that can take users' credit card details, when really what you actually needed was a payment gateway that already exists. But you went out and built this thing that actually exposes dangerous information to the internet because you weren't specific enough with the AI about what you wanted.
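To sketch that credit card example in code: the class and method names below are stand-ins I've invented for illustration, not any real payment provider's API. The risky version collects and stores raw card numbers itself; the version that was actually needed only ever handles an opaque token from an existing gateway.

```python
class HostedGateway:
    """Hypothetical stand-in for an existing payment provider's hosted checkout."""

    def create_checkout_token(self, amount_cents: int) -> str:
        # In reality the customer enters card details on the provider's page,
        # and our server only ever receives an opaque token back.
        return f"tok_example_{amount_cents}"


# What the vague prompt produced: our own form that collects and stores raw
# card numbers, the "dangerous information" now exposed on our server.
def risky_checkout(card_number: str, amount_cents: int) -> str:
    stored_cards = []                  # stand-in for our own database
    stored_cards.append(card_number)   # now we're liable for every card we hold
    return "charged, the risky way"


# What was actually needed: delegate card handling to the gateway entirely.
def safe_checkout(gateway: HostedGateway, amount_cents: int) -> str:
    token = gateway.create_checkout_token(amount_cents)
    return f"charged via {token}"      # only the token is ever stored


print(safe_checkout(HostedGateway(), 4_999))
```

The AI built exactly what was asked for in both cases; only domain knowledge tells you which one you should have asked for.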
You just aren't going to be able to, nor should you be expected to, provide adequate information to the AI to get it right 100% of the time. Will there ever come a time when we can trust the AI to make all of this work without the help of human beings? This question really interests me, because I do believe that with silicon, copper, and whatever else goes into circuits, a Turing-complete machine should be able to give rise to a conscious entity. And I think consciousness is actually the key factor when it comes to trust. You can trust something that you believe feels. You can trust something that you believe has conscious thoughts.
You can trust a cat, but you can't trust a robot vacuum cleaner. In some sense, you know the robot vacuum cleaner is going to do exactly what it's told all the time, so you can trust it in that way. But can you trust that it's going to come home when it goes missing?
I don't know if this analogy works here, but my point is this: trust between humans in a business largely works because you trust that they are like you, that they have conscious experience as well, and that they are aligned with your values. And at the moment, with where we're going with AGI and AI, we are not developing anything that has emotions.
We are not developing anything that has the best interests of anyone but perhaps OpenAI in mind. And for that reason, high-level developers, I think, are going to be required in the near future.