Mediocre!

I don't really know how I feel about AI.
Well, I mean, I know that I don't like people calling it AI, because that doesn't seem quite right? I know the name resonates with a lot of people, and it's super effective from a marketing point of view, but it's not really intelligent.
It's just a machine that is really good at reproducing patterns.
Of course, that's kind of what I am, except I happen to be made of meat, so maybe I shouldn't be so pedantic.
Regardless of what it actually is, what everyone is calling AI is here and I don't think it's going anywhere, so I should probably try to clarify how I feel about it.
...
Okay, I've thought about it a bit and have a few thoughts, one of which I'm going to focus on in this blog post.
Speaking of which, this is obviously going to be a stream of consciousness, where I explore a thought that I just had through the power of writing. If you're expecting some sort of well-structured dissertation on a topic, you'll not find one here today.
Hopefully you'll still enjoy yourself though.
Anyway, it feels like widespread use of AI will force everything towards the middle, and I don't think I like that.
In my naïve understanding, AI takes all of the data that it was trained on, identifies the patterns in play, weights those patterns based on statistical analysis and then figures out which of those patterns to use whenever someone provides a prompt.
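Just to make that idea concrete for myself, here's a toy sketch in Python. Every bit of it is my own massive oversimplification, nothing like how a real model actually works, but it captures the "find the patterns, then reproduce the most likely ones" part I'm talking about.

```python
import random
from collections import defaultdict

# A deliberately tiny "pattern machine" (entirely made up for illustration):
# learn which word tends to follow which word in some training text, then
# generate new text by repeatedly picking a statistically likely next word.

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
)

# Count how often each word follows each other word.
follow_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start_word, length=8):
    """Build a sentence by sampling each next word in proportion to the counts."""
    output = [start_word]
    for _ in range(length):
        candidates = follow_counts[output[-1]]
        if not candidates:
            break
        next_word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        output.append(next_word)
    return " ".join(output)

print(generate("the"))  # something like "the cat sat on the rug the dog chased"
```

The toy version can only ever recombine what it has already seen, which leads nicely into the next problem.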
Obviously that means AI is only ever going to be as good as the data it was trained on. That has led to some hilarious things, like Tay, but it also raises some serious philosophical questions about whether you can trust what it says when you don't know what filters were applied to its training data.
Trust is a whole different thought though, so let's try to stay on topic.
Because AI is ultimately a pattern synthesis machine with an incredibly low barrier to entry, it gives people the ability to do things that they would simply not have been able to do without involving another person.
Or investing a bunch of their own time and effort learning how to do it, of course.
That's great!
Allowing more people to make more things surely can't hurt, right?
Well, the obvious ramification is that the people who actually have those skills, the very skills that AI now provides, will be in less demand. Over time, that means there will be fewer of those people, because they will put their time and effort into other, more lucrative pursuits.
So, the people who couldn't do a thing are now doing things, and the things they are doing are based on the statistically most likely thing that would have been done in the training data.
The people who were doing the things well are now less likely to do the thing, which means a proliferation of samey, mediocre outcomes.
But wait, it gets worse!
With the outstanding examples of a thing being created less and less, any future update to the AI with fresh training data is dominated by mediocre examples, examples it helped to create, which means those mediocre examples carry more and more weight.
And now you have a mediocrity machine.
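To see that loop in numbers, here's a tiny Python simulation. All of the numbers and assumptions are mine, made up purely to illustrate the shape of the thing: quality is a single number, the AI can only reproduce the average of what it was trained on, and the share of skilled human work shrinks every generation as those people move on.

```python
import random
import statistics

# A toy simulation of the feedback loop described above. Everything here is an
# invented assumption: content quality is one number, the "AI" reproduces the
# average of its training data plus a little noise, and the fraction of
# genuinely skilled human work halves every generation.

random.seed(42)

corpus = [random.gauss(50, 15) for _ in range(10_000)]  # generation 0: humans only, wide spread
human_share = 0.5                                       # assumed fraction of new human-made content

for generation in range(1, 6):
    model_mean = statistics.mean(corpus)                # the model learns "the statistically most likely thing"
    n_human = int(10_000 * human_share)
    ai_made = [random.gauss(model_mean, 3) for _ in range(10_000 - n_human)]
    human_made = [random.gauss(50, 15) for _ in range(n_human)]
    corpus = ai_made + human_made                       # the next model trains on this mix
    human_share /= 2                                    # skilled people move on to other pursuits
    print(f"generation {generation}: spread of quality = {statistics.stdev(corpus):.1f}")
```

If you run it, the "spread of quality" shrinks every generation: the great stuff and the terrible stuff both fade away, and everything clusters around the middle.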
Funnily enough, I wrote about the idea of a mediocrity machine over a year ago, but there it was a side effect of a team that was constantly growing and never taking the time to actually understand things, leading to worse and worse outcomes.
Kind of horrifying to see it writ large like this.
The most concerning thing for me is that people won't care.
People are notoriously short-sighted.
I mean, when you're mostly just focused on trying to survive, that's where you need to spend your effort, just getting through the day. Finding food, finding shelter, reproducing, all of those base things that are required for the fundamental continuation of the species.
So, a person will gladly just do the easy thing and people as a whole will make more and more mediocre content, feeding the mediocrity machine with no concern for the long-term effects.
I would love to be wrong.
Maybe people will use AI to create more, but still apply rigorous quality checks to whatever gets used to train the next version of the AI, taking only the best of the best forward into the future.
Maybe we'll get to a place where anyone can do anything with the help of an incredibly competent AI assistant and the aggregate effect will be a constant raising of the bar, ushering humanity into a new golden age.
I doubt it though, because people don't use AI to push the boundaries for the love of making something awesome.
They use AI because it's easier.
And easy is rarely good.