Let’s all chill out. AI is a promising new technology, but the conversation isn’t always genuine, and it’s generating more heat than light.
It’s obvious there’s a lot of potential in something like ChatGPT, but those building products with it would like nothing better than for you, potentially a customer or at least someone who will encounter it, to think that it is more powerful and less error-prone than it is. Billions are being spent to ensure that AI is at the core of all manner of services — and not necessarily to make them better, but to automate them the way so much has been automated with mixed results.
Not to use the scary “they,” but they — meaning companies like Microsoft and Google that have an enormous financial interest in the success of AI in their core businesses (having invested so much in it) — are not interested in changing the world for the better, but in making more money. They’re businesses, and AI is a product they are selling or hoping to sell — that’s no slander against them, just something to keep in mind when they make their claims.
On the other hand, you have people who fear, for good reason, that their role will be eliminated not due to actual obsolescence but because some credulous manager swallowed the “AI revolution” hook, line, and sinker. People are not reading ChatGPT scripts and thinking, “Oh no, this software does what I do.” They are thinking, “This software appears to do what I do — to people who don’t understand either of us.”
That’s very dangerous when your work is systematically misunderstood or undervalued, as a great deal is. But it’s a problem with management styles, not with AI per se. Fortunately, we have bold experiments like CNET’s attempt to automate financial advice columns: the graves of such ill-advised efforts will serve as gruesome trail markers for those thinking of making the same mistakes in the future.
Read More at TechCrunch