Hello Reader,
By now you've undoubtedly seen the features:
- "Write a post with AI"
- "Click here to autocomplete with AI"
- "Summarize with AI"
Buttons like these are all over virtually every SaaS product we use. Just off the top of my head: LinkedIn, ClickUp, Gmail, and pretty much every other app in my daily rotation now offers some form of AI functionality.
Most of these functions are absolutely useless. Worse than useless, actually, insofar as they make it more difficult for you to achieve your objectives.
But that doesn't mean the technology is hopeless across the board. There are a limited number of use cases in which you might get genuine utility out of generative AI.
In this issue of An Untethered Life, I'll write a little bit about when you might consider tapping into generative AI as a resource.
But first, a quick note about the ethics and economics of this weirdly compelling new tech.
Absolutely everywhere and ethically compromised
At the time I write this, generative AI has permeated much of the technology industry. While some version of the tech has existed for somewhere between a few decades and over half a century (depending on how you define it), its growth exploded when ChatGPT hit the market and wowed the public with the first chatbot that could - in some instances - kinda, sorta pass the Turing test.
Because of a truly ungodly amount of investment from every corner of the technology and finance world (large companies like Microsoft and Google are fully committed, financially and strategically, to these technologies), software companies face tremendous pressure to get some use out of generative AI. So they cram it into products where it doesn't make much sense or add much utility.
At the same time, the tech industry has totally failed to grapple with the legal and moral implications of generative AI. The genesis of the technology is deeply problematic, involving unscrupulous founders, rampant copyright infringement and intellectual property theft, and peripheral harms we're only beginning to understand.
I say all this to make clear that I completely understand if you choose not to use generative AI at all. You might be uncomfortable using a technology that threatens the foundations of legacy publishing and news, makes a mockery of intellectual property rights, and faces lawsuits that pose an existential threat to the firms that own it.
But, in case you're still interested - and you happen to be one of the few people who can make a business case for using generative AI - I'll set out the conditions that should be met before you walk down this road.
I'll also give you a quick example, from my own service, about how a person might go about using it.
Structured inputs, structured outputs
Generative AI does a lot of things. The problem is that it does most of those things very poorly. Essentially, asking it to create anything remotely creative or clever is a total dead end. This immediately rules out many of the features offered by SaaS products.
You're not going to want to "Generate a post with AI" on social media. Unless, of course, you don't mind posting something borderline incoherent, extraordinarily dry, and exceptionally redundant.
You're not going to want to autocomplete an email to your spouse with AI, unless you want him or her to think you've been replaced by some weird, robotic body-snatcher.
No, what generative AI is indisputably good at is transforming semi-structured inputs into semi-structured or highly structured outputs.
This - admittedly narrow - use case is where generative AI truly shines. If you find yourself in a situation where you regularly receive inputs of a certain type or form and are expected to process, summarize, or otherwise transform those structured inputs into a predictable product, you might be one of those rare people for whom generative AI can be genuinely useful.
And what would this "structured input ➝ structured output" workflow look like?
Here's an example from my own copywriting service...
Book covers and web copy
My day job consists of writing book covers and web copy for nonfiction books.
I receive source material about a book that comes in a few different, predictable formats. I review the material, populate various cover and web copy templates with copy specific to the book, and send that off for review.
I'm sure you can see why this might be a good case for implementing generative AI. My inputs are highly structured and I use them to create templated outputs in a consistent, commercial style of prose.
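To make that a little more concrete, here's a rough sketch of what a template-filling step like this could look like in code. To be clear: this isn't something I've actually built. The book fields, the TEMPLATE text, and the draft_back_cover_copy function are all hypothetical, and the sketch assumes you have OpenAI's Python SDK installed and an API key configured; any comparable provider would work much the same way.

```python
# Hypothetical sketch: turning structured book source material into
# templated back-cover copy. Illustrative only, not my actual workflow.
from openai import OpenAI  # assumes: pip install openai, OPENAI_API_KEY set

client = OpenAI()

# The "structured input": predictable fields received for each book.
book = {
    "title": "Example Title",
    "subtitle": "An Example Subtitle",
    "audience": "first-time managers",
    "key_points": [
        "how to run one-on-ones",
        "how to delegate without micromanaging",
        "how to give useful feedback",
    ],
}

# The "structured output": a fixed template with clearly labeled sections.
TEMPLATE = """Return exactly these sections, each starting on its own line:
HOOK: one attention-grabbing sentence
BODY: 2-3 sentences summarizing the book's promise to the reader
BULLETS: three short benefit statements, one per line
CTA: one closing call to action"""


def draft_back_cover_copy(book: dict) -> str:
    """Ask the model to fill the fixed template from the structured input."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you prefer
        messages=[
            {
                "role": "system",
                "content": "You write concise, commercial back-cover copy for nonfiction books.",
            },
            {"role": "user", "content": f"Source material:\n{book}\n\n{TEMPLATE}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # A first draft for a human to edit, not a finished product.
    print(draft_back_cover_copy(book))
```

The point isn't the specific provider or model. It's that both ends of the pipeline are predictable, so the machine is doing transformation rather than invention - and the output is still a first draft for a human to edit.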
Now, I haven't involved genAI in my workflow yet because, to be honest, I'm extremely leery of the aforementioned ethical problems. I don't like this tech. I don't trust the tech bros who own it. I think it causes widespread harm and violates the law in myriad ways.
But I definitely recognize that the technology is almost perfectly suited to my use case.
What do you think?
I'd love to hear from you on this topic.
Do you plan to use generative AI in your workflow going forward?
Do you have a use case that's a good fit for the technology?
What do you think of the ethics of it all?
Do you agree with my position that this is a fundamentally compromised technology? Or are you on Team Tech-Bro?
Feel free to hit reply on this email and give me your two cents!
Thanks for reading,
Steve
Did you enjoy this email? Follow me on LinkedIn and Medium!