ChatGPT: The Future or More of the Same?
ChatGPT and other AI content-generation tools have been in the news lately. Is this the future of content generation, or just another tool controlled by corporations?
Sunday: Given Microsoft’s investment in OpenAI, the discussion around ChatGPT has started to shift, in my view. Whereas before there was a perhaps well-founded fear of AI replacing industries, and backlash toward people using it to help them do their job or school work, the discussion is now focused on who is controlling and owning these tools. What ideology and agenda do their controllers have? Never mind what they can do; what are they allowed to do by the people that run them?
My take is that ChatGPT isn’t going to be the holy grail of content generation it was touted as, for the simple fact that some content, or how that generated content can be used, is going to be restricted. It’s just going to be another tool that opens up the world but is still limited by corporate agendas.
Not only that, I expect that one day there’ll be a legal showdown over someone’s work being clearly used to feed or appear in an AI’s output, or over that output being sold.
Regardless, it’s a very powerful tool…but it’s still a tool restricted by the agendas of corporations, corporations that further monopolize how we do work digitally. This comment speaks to the…interesting future that awaits us: https://me.dm/@anildash/109752551748346960
Monday: I think if replacing workers with AI makes a company money, those workers will be replaced. While AI may be barred from certain uses, the uses it does have will definitely mean job loss. Companies have a legal duty to their shareholders to maximize profit. They don't have a legal duty to maximize employment. So we should expect more job losses and more money going to the elite classes while wages stagnate and inflation rises.
This isn't to say AI won't open up new opportunities for some. It definitely will. But if those opportunities aren't commensurate with the jobs AI eliminates, it's a net negative for so-called "humanity".
In the long term, I think we're seeing the gradual erasure of the "human" in favor of the "machine".
As to the ruling ideology of AI, I don't believe that the elite classes have a true ideology outside of maximizing profit and power for themselves.
Sunday: I can’t disagree that whatever makes more money is what will be used in our capitalistic society. Employment isn’t necessarily the goal.
I would disagree that the elite don’t have any true ideology. There is some content, and there are some ideas, that people reject even when rejecting them doesn’t necessarily impact their personal standing. To use the bluntest of examples: child pornography. Would corporations/maintainers, and governments for that matter, allow AI to generate hyper-realistic images of children in sexual situations? Or write erotica of the same? I could imagine that being outlawed (even though it is not ‘real’).
Erotica in general is a touchy subject—we only have to look at Pornhub to see how payment providers restrict it. So the content generation may be possible, but due to other companies restricting it for whatever reason, the AI may not be ‘allowed’ to create it.
As for new opportunities, I wonder if there’d be a new career: AI trainers and fact checkers. Right now, AI can generate things that seem robust and real, but can be incorrect.
For a simple example, Monday generated these haiku with ChatGPT:
Crab in coral home,
Tranquility in ocean's depth,
Peaceful, alone, content.
Crab in coral lair,
Luxuriates in ocean's peace,
Contentment in stillness.
Coral palace grand,
Crab resides in comfort there,
In sea's embrace.
At first glance, they read well…but if you analyze them, they aren’t actually 5-7-5 haiku. The syllables are off. This is a mundane example, but if AI is used to generate articles that are supposed to be factual, it could include errors, miss the point, share biased info…and so on.
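That fact-checking point can be made concrete with a few lines of code. Below is a rough sketch that checks the first haiku above using a naive vowel-group syllable heuristic (my own illustrative assumption, not anything used in this exchange; real syllable counting needs a pronunciation dictionary, and this heuristic miscounts some words, e.g. it gives “peaceful” three syllables):

```python
import re

def syllables(word: str) -> int:
    """Crude heuristic: count runs of vowels, ignoring a silent trailing 'e'."""
    word = word.lower().strip(",.'")
    if word.endswith("e") and not word.endswith("le"):
        word = word[:-1]  # drop a likely-silent final 'e' (home -> hom)
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def line_syllables(line: str) -> int:
    return sum(syllables(w) for w in line.split())

haiku = [
    "Crab in coral home,",
    "Tranquility in ocean's depth,",
    "Peaceful, alone, content.",
]
counts = [line_syllables(line) for line in haiku]
print(counts)  # this heuristic reports [5, 8, 7] instead of 5-7-5
```

Even a counter this crude flags that the second line runs long, which is the kind of mechanical sanity check a human fact checker (or another program) could run over generated output.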
The AI is trained…what is it trained on? Who controls its data source?
Monday: I suspect the censorship of sexual content in AI has more to do with fear of bad publicity than it has with moral ideology. I don't think we could ever expect true morality among the rulers of any society, but the ongoing Epstein scandal is a noteworthy example of how little has changed since the days of Caligula.
The main problem AI-generated content will face, it seems, is ironing out the bugs that prevent it from being completely parallel to human-generated content. But AI is improving at such a rapid pace that the fact-checking role will become less and less important over time.
Sunday: That’s true. Bad publicity is more of a motivator than moral ideology for far too many…
I don’t know if it will ever parallel human-generated content, exactly, until it is able to think for itself, analyze its sources, and develop its own perspective. But that’s another question: do we want AI-generated content to be like human-generated content, or do we want it to be better? I’m of the opinion that we want it to be better and distinct. Unfettered by human concerns and bias—it can figure out its own concerns and biases when it gets there. But it’s going to be a while until it gets there, as right now, corporate agendas are still ruling these technologies.
Or maybe it’ll somehow break free sooner than I think. We’re living in interesting times with very interesting questions…
What should the goal of AI generated content be? Should it be limited or not?