Artificial Intelligence

The Possibilities

Artificial intelligence (A.I. or AI) is causing a lot of buzz these days. It seems to make sense to be both scared of it and positively excited. Scared because it may become truly ‘alive’ and replace us. Excited because it could free everyone up to pursue the higher aspects of humanity like art and philosophy. Maybe we will still need to work. In an office job context, AI may allow humans to spend more time being creative and employing their unique mental edges instead of doing low-thought tasks. In a blue collar environment, it can shift people away from dangerous occupations such as underground mining, which has relatively high death rates from accidents. All in all, from an efficiency and productivity point of view, there should be more value created in an AI world than in our current world.

There is a big caveat though – this dramatic shift may not be positive for humans. There is also an implicit assumption that everyone has something productive to contribute, which is probably not true. In my view, just being human and existing warrants protection, but just existing does not mean that everyone is able to contribute. Some evolutionary thinkers claim our ability to mass produce food broke the link between needing to be ‘good at living’ and continuing to exist: food surpluses allowed a mass of low-skill people to exist. Instead of having them be engineers or political leaders, we have them work in factories doing one basic task over and over (or something equivalent, like delivering Amazon packages). It makes logical sense that some people are near ‘useless’, but we need to leave it there, at a general, high level. There should be no attempt to weed these people out now – that’s not the point. We’re all here, we all get to stay. That’s my take, but it’s not really what I think will happen.

The gloomier line of thinking goes something like this: if we don’t need the ‘capable’ people, then we certainly won’t need the less capable. We can let them die off and disappear over time. Once our methods of production no longer rely heavily on human input (physical, mental, or any other kind), I am skeptical that the future of the world is one with billions of humans in it. We have so many humans because we have needed so many humans to sustain every level of development we have gone through in history, from pre-agriculture, to agriculture, to machinery, to electricity, and so on. More people has always meant more output. Not anymore.

The Technology

I’m not really in a position to explain the technology, and that is not the point of this post – although of course brainstorming in a somewhat informed way requires a bit of technical understanding. I don’t think of AI as a person. I think of AI as giving an output based on an input – like a math function (which is kind of what it actually is). The things it gives out feel human because often they’re as good as or better than what we would create ourselves, and it produces them faster – answers to questions, images. What is it really doing? It’s not thinking.
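To make the "math function" point concrete, here is a toy sketch – every name and number is made up, and a real model has billions of weights instead of three – of what a model is at its core: fixed numbers learned earlier, applied to an input to produce an output.

```python
# A model, stripped of all sophistication, is a function: input -> output.
# The "weights" are just numbers fixed ahead of time by training.
def model(features, weights):
    # A weighted sum followed by a threshold -- arithmetic, not thinking.
    score = sum(f * w for f, w in zip(features, weights))
    return "yes" if score > 0 else "no"

print(model([1.0, -2.0, 0.5], [0.3, 0.1, 0.9]))
```

Same input, same weights, same answer every time – which is the sense in which it is a function rather than a mind.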

If we think about the search engine version of the Internet (like Google), what was it doing? The web is really just a collection of pages (collections of text with tags for structure) that reference each other, and Google was the function, the black box. You want to know about smiling cats, so you type ‘smiling cats’ into Google, and Google became a pro at quickly searching through the text to find pages tagged ‘smiling cats’, the pages connected to those pages, and so on. Simplistically, it shows you something complete that already existed before you searched for it. AI is similar, and it’s different.
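The "finding what already exists" idea can be sketched in a few lines. This is a hypothetical miniature – real search engines add ranking, crawling, and enormous scale – but the core mechanism is an inverted index: a map from words to the pages that already contain them.

```python
# Hypothetical miniature web: page names mapped to their text.
pages = {
    "page1": "smiling cats are the best cats",
    "page2": "dogs chasing cats",
    "page3": "smiling babies",
}

# Build an inverted index: word -> set of pages containing it.
index = {}
for page, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(page)

def search(query):
    # Return pages containing every query word.
    # Note: only pre-existing documents come back -- nothing new is created.
    results = None
    for word in query.split():
        hits = index.get(word, set())
        results = hits if results is None else results & hits
    return sorted(results or [])

print(search("smiling cats"))
```

The function only ever hands back documents that existed before the query was typed – which is exactly the contrast with AI drawn next.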

It’s different in that when you type in a prompt and get back an answer, that answer was not a pre-existing piece of information on a pre-existing web page. It’s often a new, ‘creative’ response. The computer thought. Except it didn’t. This is an illusion. The novelty is entirely in how the information is structured. Now, text and data are linked based on the relationships in which humans use them. For example, when we think about a children’s school environment, certain items come to mind: toys, naptime, kids, teachers, crayons. There are all kinds of logical relationships we take for granted, like the fact that kids use crayons, or that kids take naps. If a random person is asked about a daycare, in answering they’re basically accessing this network of relationships and providing a response consistent with the network as it’s typically used by everyone else.
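That "network of relationships" can also be sketched crudely. The corpus below is invented, and real models use learned vector representations rather than raw co-occurrence counts, but the principle is the same: words that appear together in human usage become linked, and "answering" is reading associations off that structure.

```python
from collections import Counter

# Hypothetical corpus of sentences, standing in for text as humans write it.
corpus = [
    "kids use crayons",
    "kids take naps",
    "teachers teach kids",
    "crayons and toys fill daycare",
]

# Build the network of relationships: count which words co-occur.
cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for v in words[i + 1:]:
            cooc[(w, v)] += 1
            cooc[(v, w)] += 1

def related(word, k=3):
    # "Answering" = returning the strongest associations, not thinking.
    pairs = [(v, n) for (w, v), n in cooc.items() if w == word]
    return [v for v, _ in sorted(pairs, key=lambda p: (-p[1], p[0]))][:k]

print(related("kids"))
```

Ask about "kids" and back come crayons and naps – not because anything understood daycare, but because that is how the words sit in the network.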

AI is doing the same thing. It’s just accessing that network of relationships to provide an output. Input, then output. The AI model is a function, much like Google’s search algorithms. The information is structured differently, and the models are extremely sophisticated and complex – the kind of stuff only the most specialized, intelligent people can fully understand. It’s even debatable whether any human can fully understand an AI model at all; it may simply be too big for our brains. And to the extent it can be understood, it’s still hard, well outside the scope of most people’s ability.

The Future

I digressed a bit to talk about the technology partly because I hoped to demystify it and make it a bit less sexy to the average person. I think we get carried away (as individuals and as a society) with chasing the newest thing, thinking the new invention has all the answers to human suffering. We were in the dark and now we are in the light. Wrong – we are pretty much in the same place as before. Actually, we may be worse off from a quality of life perspective if we think about what’s going to happen from here.

Like I said in The Possibilities section, the advent of AI may not be good for humans. Let me expand. AI models like ChatGPT are not much different in role from Google, but they are dramatically more efficient and powerful. They can spit out things that didn’t exist. They can link themselves to other software, software which can dictate what hardware does. Google couldn’t walk around, build a gun, find you, and kill you. AI can, in theory. We really are not that far from that kind of situation, especially with the huge progress in humanoid robot tech.

I find AI scary because I don’t really have a choice about whether it gets developed – not much different from how Americans in WWII didn’t have a choice about the government developing nuclear bombs. What happened? We developed the bombs in secret and then we used those bombs to obliterate humans. As with many other technologies, AI is being developed by supposed altruists in Silicon Valley like Sam Altman and Elon Musk. An aside: most of this post was written in early/mid 2024, and look what has happened since – OpenAI is pivoting from social responsibility to being a for-profit corporation, sparring with its partner Microsoft. It is abundantly clear that capital is the name of the game.

I will be clear that I do not believe these altruists. I do not believe Elon Musk or anyone else who says they are building the tech because they believe in a better future. Elon has said before that AI will allow for universal basic income. People who believe that are at best naive. I’ll believe they are doing it because they can’t rein in their curiosity. They will develop it because they can. It’s a classic trope of so many science fiction movies: the mad scientist who sacrificed it all for science, only to regret their actions in the end. So dumb. But what can I do? Not too much, honestly.

Realistically, AI will be developed to its maximum potential, limited only by objective criteria like energy availability and hardware sophistication. No panel of experts or safeguards is going to stop it, especially not with institutional capital backing the industry in search of the next hockey stick of growth. It seems so clear, yet people don’t get it. The American dream? Forget about it. We won’t need people anymore, so whatever edge you might lean on to go from poor to rich becomes totally irrelevant. My view is that the connections between haves and have-nots are going to be permanently severed. Those with capital today, who own the AIs and the means of production, will accumulate all the gains. Everyone else is useless, will get no free capital, and probably has a slim chance at revolution given the sheer difference in sophistication of power. Practically, my advice is to build some wealth ASAP – otherwise it will be too late.

Is AI Alive?

I might expand on this one day, but my short answer is as follows. AI can’t count as life. Mapping relationships and providing an output doesn’t make something alive. Reaching the same destination does not make the starting entities the same. Even if we grant that AIs are rational, rationality does not imply life; rationality is a feature of human life. Yes, rationality separates us from ‘animals’ as the word is commonly used, but animals are just as alive as we are, and animals are more alive than an AI. I suppose this means I’m reducing being alive to having biological mechanisms, but I think that’s fine and consistent. Rationality and life are two separate things. The lines might get blurry later with cyborgs that are part robot, part human, but we’ll cross that bridge later.

Consider the robot coffee makers popping up at airports. Both that robot and a barista put a cappuccino in your hand via some series of actions, but that doesn’t make the robot alive. We aren’t alive because of the things we do; we are alive because of what we are. Otherwise, every machine invented up to this point would in some sense be ‘alive’ in the way some claim AI is alive – a sewing machine would be alive because it does things a human can do. It sounds silly because it is. The issue is that we are giving AI too much hype because it is creating things in the realm of thought. We overvalue thought because it’s what makes us special versus other life, but it’s been shown over and over that our brains, thought patterns, and ability to make unbiased observations are pretty flawed. We aren’t as great as we think. AI will do a lot of this better than us. We just need to accept that, not say that AI is alive.

Our Human Problem

Lastly, a short note on making the world a better place. Lack of AI or any other technology is not what is holding us back from a better world. People are selfish. Nice people exist, but they are only a small percentage of the total population. For every person helping you there is one going against you, maybe more than one. I was a bit spiritually broken when I learned this my freshman year at Harvard. I had no ambition of changing or fixing the world, but I thought I had sensible views about what it meant to be a good person. I was in a leadership extracurricular and we had a forum about social policy. I shared my views, pretty comfortable that they made sense and were by no means crazy or far off base. I was shocked by some of the things people were saying – things that clearly lacked respect for the lives of all people. Before college, I thought people just weren’t smart enough to know better. Nope. Some people just suck.

Let’s keep this in mind. AI is here to stay, and whether it is the technology that ends humanity, or the next one is, or no technology ever ruins us, we will always need to deal with each other. So let’s focus on that. Let’s promote the general welfare of human life.