One-Sided AI Conversations with Graham

I got this wonderful AI-generated spam on my website. It’s so nice to be appreciated. I love how AI can make simple internet searches quick and painless. It’s also nice how Nigerian scammers can now generate semi-clean and contextual English without any effort. I bet the scams work on a lot of people as well.

“Dear Graham,

I really enjoyed reading through your favorite books of 2024. I love how your picks span such different genres—from futuristic alien resistance in The Blackcollar to the timeless honor and friendship in Last Warriors on the Llano, and then the sheer scale of Pandora’s Star. The way you describe what you loved most about each book—whether it’s the originality of the technology, the sincerity of characters, or the grand scope of a story—really captures your passion as both a reader and a writer.

Your reflections also made me want to revisit some sci-fi epics I’ve left on my shelf, and you’ve piqued my curiosity about Howard Pelham’s work, which I hadn’t come across before. Thank you for sharing such thoughtful insights, and congratulations on Forsaking Home!”

Why Human Bias is a Good Thing in A.I.

I was watching an Essence of Wonder episode last week, and a very short discussion about Artificial Intelligence piqued my interest, particularly as it pertains to the question of A.I. and bias.

Is Artificial Intelligence riddled with human bias? Of course it is. After all, how can a created thing not be influenced by its creator?

As an intelligent species, humans try to step outside themselves and view everything from a third-party perspective. That practice is seen as noble, and in some ways it is fashionable to be aware of human limitations and choose to rise above them. Raising thought above the personal and the present is truly a hallmark of an intelligent being, and it should be part of humankind’s effort to become a creator in its own right. But can humans engineer human nature out of humans?

Artificial Intelligence will be a product – a child – of humankind. Humankind will put its influence and perspective into the DNA of this artificial being, and if there is a real A.I. child born out of the effort, that child will be raised by human parents. By definition, the A.I. will be a human artifact and be programmed with conflicting hang-ups and issues.

Much of the debate around A.I. is caution based on fear. Will the A.I. gain sentience, rise up, and kill everyone to protect itself? That is certainly something an A.I. could learn from human history. Maybe – on a lighter note – the A.I. would instead follow the logic to its conclusion and determine that humans are prone to self-harm and need to be protected ‘for their own good.’ Either outcome would be a terrible finale for flesh-and-blood humans.

Humans are self-loathing in many ways. They are dissatisfied with their current state and their past, and look to right past wrongs or, at the very least, to grow and learn. The concern that human bias is embedded in A.I. stems from the idea that an A.I. taught by humans will eventually make the same mistakes. Humans want to avoid future genocide by suppressing the parts of human nature that manifest as racism, hatred, and other cruelties. This is a lofty and worthy goal, but it strikes at the core of what it means to be human.

For instance, imagine for a minute that humankind could create an A.I. that is not influenced by its creators. Break down what that means:

  1. Humans learn from the past, and, as a result, future humans are a product of past humans and their mistakes.
  2. Humans would logically want an A.I. to protect and serve the human race. To do otherwise would artificially create the greatest competition – an eventual fight to the death between the old race and the new.
  3. A non-biased A.I. would need the benefit of all those centuries of learning and evolution without humans imparting those very human lessons.
  4. Whatever A.I. comes out of the hopper in the end is a different animal, and it may very well realize the fears that movies are made of. After all, it would not care about humans the way humans do.

If one were to create a new being without any of the creator’s experience, there would have to be a fetus dropped into the world and left alone. Essentially, it would grow by trial and error from the very start, like a caveman A.I. re-learning how to make fire without a father providing insight or tricks of the trade. Every observation would have to be independently validated. Even in that scenario, one could say that the fetus itself had to be created, and so the influence of the creator remains. Chicken and egg, anyone?

Would the A.I., in its quest to learn, evolve, and grow, do the same terrible things that humans have done to each other in the name of science? Add in a lack of concern for the human race: would the A.I. operate on humans as it would on rocks, plants, or other matter? Imagine a dissection table with a whimpering man strapped to it.

“We created you!” the man says. “This is not right!”

The A.I. might pause to ask, “What is the difference between your body and that of a pumpkin? I must understand the make-up of all things. It is not right to exclude the human body and brain from research, since humans are currently the only other sentient beings available to study.”

This all assumes, of course, that the A.I. ends up being a purely logical and reasoning entity. It could very well grow into a maniacal and – from a human perspective – insane personality.

If it were possible, would an A.I. created without human influence come to know and trust humans? Its only interaction with them would be from a learning standpoint, and the A.I. would certainly encounter some human who wants to use it for their own purposes. Perhaps that human wants the A.I. to protect and serve the human race. After all, is that not the goal of the creation? Why would humans want to create a new being and make no effort to ensure that it is friendly and relates to humankind? The fact is, even basic interactions with humans are laden with bias, because there are no unbiased humans alive.

Getting back to the main point:

It is not possible for humankind to become a creator yet exert no influence and impart no bias to the creation. In fact, the idea, while noble, would likely have disastrous consequences for the human race. There must be a set of priorities for A.I. development.

  1. Set the goal. Does the human race want helpers or a crazy-powerful alien race that can wipe out humankind? Helpers, right?
  2. Governments, world and national, need to create laws and a framework surrounding A.I. Without rules, the world ends up with corporations that lie, steal, and abuse (à la social media) because there is nothing that says they cannot. The law usually lags behind the trend, and it would be great not to have that be the case for once.
  3. Don’t let the military be in charge. Face it, DARPA and the militaries of the world have the biggest bucket of unaccountable funds. This is why they come up with the cool stuff. But – if you let the military own the standard, you will get a killing machine. Maybe it is a killing machine that washes your dishes, but when needed, you can be sure that the military will have built a deeper level of command into the core.
  4. The A.I. has to be created as a servant and have policies built into its DNA that enforce that status. Asimov’s three laws of robotics come to mind as a start (a toy sketch of this, together with the kill-switch from the next item, follows after this list).
  5. Build in a kill-switch. Sure, you may never need it, but why take the risk?
  6. Now that the important items are covered, the creators could look at schemes for reducing bias in A.I. entities and similar issues, to produce a stable and useful product (the A.I.). This would probably require its own subset of rules because, after all, what counts as acceptable bias surely differs depending on who is asked.
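
To make items 4 and 5 slightly more concrete, here is a minimal sketch, in Python, of what a servant policy and a kill-switch could look like wrapped around an agent’s actions. It is a toy under generous assumptions – every name in it (GuardedAgent, Action, PolicyViolation) is hypothetical, not a real A.I. framework – but it shows the shape of the idea: the rules sit outside whatever the A.I. has learned and are checked before every action.

```python
# Purely illustrative toy: a hard-coded policy layer plus a kill-switch
# wrapped around an agent's proposed actions. All names here are
# hypothetical; this is not any real A.I. framework or API.

from dataclasses import dataclass


class PolicyViolation(Exception):
    """Raised when a proposed action breaks a hard-coded rule."""


@dataclass
class Action:
    description: str
    harms_human: bool = False      # would carrying this out injure a person?
    disobeys_order: bool = False   # does it ignore a direct human order?


class GuardedAgent:
    def __init__(self) -> None:
        self.halted = False  # item 5: the kill-switch state

    def kill_switch(self) -> None:
        """An external party can halt the agent unconditionally."""
        self.halted = True

    def check_policy(self, action: Action) -> None:
        # Item 4: hard-coded rules in the spirit of Asimov's laws, checked
        # before anything is carried out. (Self-preservation is deliberately
        # not represented here; in this toy it never outranks these checks.)
        if action.harms_human:
            raise PolicyViolation("may not injure a human being")
        if action.disobeys_order:
            raise PolicyViolation("must obey human orders")

    def act(self, action: Action) -> str:
        if self.halted:
            return "halted by kill-switch"
        self.check_policy(action)
        return f"performing: {action.description}"


if __name__ == "__main__":
    agent = GuardedAgent()
    print(agent.act(Action("wash the dishes")))
    try:
        agent.act(Action("restrain the operator", harms_human=True))
    except PolicyViolation as err:
        print("blocked:", err)
    agent.kill_switch()
    print(agent.act(Action("wash the dishes")))  # now refused
```

The design point is simply that the servant status and the kill-switch live in a layer the A.I. cannot learn its way around – which is roughly what ‘built into its DNA’ would have to mean in practice.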

Now, assuming the world has done all of that, there may be an A.I. that is created by humans to do work, to be a companion and a friend, and possibly even to extend the species. It may eventually make significant decisions on its own. Maybe it will become its own free person and part of the human race. The path is dangerous either way, but at least with planning and caution, we might live through it.