Instagram took 2.5 months to reach 1 million users
It took Spotify 5 months to reach 1 million users
Within one week of launch, ChatGPT gained 1 million users, and two months after launch it had 100 million active users [zdnet.com].
TikTok took 9 months to reach 9 million users.
Just let that sit for comparison.
As writers, are you concerned about the exponential uptake of Artificial Intelligence and its ramifications?
What Exactly is ChatGPT?
ChatGPT is a language processing tool driven by AI technology that answers questions and assists with tasks such as writing emails, essays, and code.
ChatGPT is built on a Generative Pre-trained Transformer (GPT), a large language model (LLM) trained on vast amounts of text from the internet, including websites, books and news articles. It learns the context of the sequential data fed into the system, meaning it tracks relationships between words in the data set it was trained on.
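A toy sketch can make "tracks relationships between words" concrete. The following is an illustration only, and assumes nothing about GPT's real architecture (which uses a neural network trained on vastly more data): a simple bigram model that counts which word follows which in its training text, then predicts the most likely next word.

```python
from collections import Counter, defaultdict

# Illustration only - NOT GPT's real architecture. This bigram model "learns"
# which word tends to follow which by counting adjacent pairs in a tiny
# training text, then predicts the most frequent successor.
corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it followed "the" twice, more than any other word
```

Real models predict over an entire vocabulary at once and condition on far more than the single preceding word, but the core point stands: the output is entirely a product of patterns in the training data.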
This application doesn’t just answer questions; ChatGPT and other generative A.I. models can describe art in detail, hold philosophical conversations, create emails or sales campaigns, fix broken computer code, or improve customer support.
The chatbot tool was created by OpenAI, an A.I. research company – a start-up that presumably controls what is fed into this life-changing tool. It is currently free of charge while its creators gather research and feedback, though premium access starts at $20 per month.
An art generator creates a thorny copyright issue – as apparently “billions of copyright images” were used in a training data set “without compensation or consent from the artists.” But in a fast-moving world, legislative adjustment is a slow-moving beast and prosecution for copyright infringement is complex, expensive and lengthy.
Jobs – Lost, or an Endangered Species?
Do you foresee customer support writing and marketing jobs potentially disappearing?
Will A.I. be the content creator of choice when writing social media posts and advertising campaigns, or even a book?
Blogger Snow mentioned how ChatGPT and Midjourney helped Ammaar Reshi create a children’s book in a weekend. Reshi is now putting together an animated Batman video, using a ChatGPT-generated script and editing the images, voice-over and video with Adobe AI and the phone app Motionleap.
Reshi stated:
“With any kind of new tech that is incredibly powerful, it’s somewhat threatening to people,” he said, adding: “You see people wondering, ‘Will this replace my job?’ … That concern — we shouldn’t pretend like it isn’t a serious one.”
Shudu, the “world’s first digital supermodel,” was created through artificial intelligence and has been used in a Louis Vuitton ad. While bizarre, this may or may not be a bad thing for women with body image concerns – if we can separate the digital from reality.
The Implications of The Age of Artificial Intelligence
Are we seeing the dawn of an age when freelance writers become a relic – a job that once existed, like a projectionist?
For some users, ChatGPT has already superseded Google searches, as the tool tailors its responses in human-like prose.
Human beings may be less reliable than chatbots, but they do produce original material, whereas A.I. tools are limited to the data they were trained on.
Chatbots have limited knowledge of world events after 2021 and may also occasionally produce harmful instructions or biased content, according to an OpenAI FAQ. “The way you ask the question — the prompting — can have an important effect on the quality of the result.” [computerworld.com]
Although A.I. is tempting to use, might our enthusiasm for it and potential overuse of it lead to a standardisation of opinions and perspectives? If so, could this unify communities or stultify intellectual progress?
Will humans become relegated to the fringe of intellectual pursuit? Left on the shelf like a World Book Encyclopedia?
Toby Walsh, one of only 10,000 or so individuals with a PhD in Artificial Intelligence, suggests that technology does make us lazy. Without use, our brain capability begins to shrink: for example, Walsh considers we are one of the last generations able to read a map. Spatial intelligence has decreased for lack of neural stimulation since we began relying on GPS navigation. [And here I was blaming it on ageing!]
With technology thinking for us, might our brains shrink further?
Technology Makes Us Lazy
Who bothers composing a handwritten letter anymore? I haven’t even checked my letter box for possibly a year or more – although the M.o.t.h. does it – once a month.
When was the last time you pulled out a street directory or remembered directions? My 20-something daughter does not even know what that is.
Do you still purchase recipe books or use Google to search online?
Who remembers telephone numbers off by heart?
Do you still add up several grocery or product purchases, or the change the cashier gives you, in your head?
‘The Cat is out of the Bag’
“These products will eliminate ‘artist’ as a viable career path,” a release from the Joseph Saveri law firm stated. “The thing is: …There’s no going back, so I don’t think litigation is going to stop these platforms from continuously developing and gathering up as much data as they can,” he said. “It’s going to keep happening.” [Nik Thompson]
Chatbots Are Not Perfect
But A.I. has flaws.
Blogger Sandy spoke recently about disruptive technology and referred me to an article suggesting that A.I. assistive writing tools already analyse data and produce articles using natural language generation software.
However, the article also pointed out, ‘They cannot write articles with flair, imagination, or in-depth analysis.’ They may not have made writers redundant, but they have increased the number of niche articles written. Niche articles are growing in popularity, and we are already presented with a personalised selection of ‘Your News’ each morning in our browsers. If the reader wants more, the trend is towards a paywall and subscription model – a user-pays system.
The article also suggested that article generating systems won’t replace writers because readers want to read opinion and analysis.
Personally, I would prefer to be presented with a balanced story of facts and form my own opinion, rather than hear it from a journalist or expert. But I take their point.
Chatbots may give false information.
ChatGPT is not abreast of local news and idioms in every region – only the data that was fed into its training. It therefore frequently produces incorrect information and graphics.
For example: type in MBRC (our local Council) and you will get something located in the USA, not Australia. Type “Norwegian Rosemaling” into the graphic generator and yes, you will get a new piece of art, but it will look like some kind of distorted, mutant, William Morris wanna-be design – in other words, utter rubbish. The human touch is still relevant and necessary.
Writers Still Have Relevance
A.I. tools are easily confused by relatively simple, existential questions, as some IT nerds have famously posted on social media.
And as for proofreading, the human mind still retains the edge over any automated tool.
So take that, A.I.
Content thieves, not writers, may be the ones who are now redundant. (And take that: compulsive re-bloggers who do not credit original writers).
Like an encyclopedia, we may not quite be ready to be left on the shelf – yet!
A philosophic Aussie writes here at StPA, one who will readily admit to loving Scandinavia. I'm interested in global politics and what drives us to be who we are. Scratch the surface and you'll find a practical Environmentalist with an egalitarian bent, trying to unleash a little creativity via the written word.
Scandinavian culture, literature and traditions are close to my heart, even though I'm Australian. Travel and courteous discussions greatly broaden the mind, so I travel if I can and am always up for a vigorous, respectful discussion. I'm an avid reader, I enjoy photography and craft, particularly traditional art forms. I hope you'll find 'Something to Ponder About,' in my WordPress Community.
73 thoughts on “Will Writing Bots and Artificial Intelligence Make Writers Redundant?”
Amanda, reading that AI can write acceptable poetry was troubling enough. Since most folks hate to write, there is a huge audience for AI to help out. A friend of ours taught online, so she would have to flush papers through an app to look for plagiarism. Think of that, using an AI device to look for plagiarism. And, as I type this, the next few suggested words are automatically typed in lighter print for my consideration. Keith
Yes, Keith we already see a standardization of writing in the predictive words suggested for us in email. In time, I feel that speech will follow these suggestions. Once again, diversity of words could be narrowed. Is this a good thing?
Do you mean that AI can help with plagiarism or complicate it?
I certainly feel, where poetry is concerned, to use AI is kind of wrong. Poetry should come directly from the heart, don’t you think?
Amanda, to answer your question on plagiarism, I think AI will likely do both. More people will use it without citing the sources and it will be easier to catch someone who does use it. Keith
Amanda, a friend who taught nursing in person and online found a number of students who actively plagiarized. She would run the electronic versions of the essays through a program designed to compare the essay to other materials. She would find a lot of plagiarism. Keith
Robotic anything can make us quake in our boots. I am seeing cashierless stores becoming the norm here, where jobs to feed families are already in short supply. I am a relic who still writes snail mail as often as possible. I just got a Christmas card from a friend two days ago in reply to mine. I was so happy to see it. Neither of my children write and mail. Though well read and well spoken, writing ability eludes them. I think, like many other things that are changing, it’s a benefit to some and a bane to others. I’m ready for a car that can drive me where I want to go so I won’t have to ask for a ride. Those vehicles are imperfect, though, as are many drivers already on the road. Watching all the changes in the world has left us with grave concerns as well as wonder. I like the spelling help, but it does make me lazy. I don’t remember phone numbers anymore. We change them too often now. But AI has no creative soul or imagination. It can only reproduce what is already there. It’s a mixed bag.
As always, Marlene, you cut to the chase! AI has no creative soul or imagination. Yes. But will that come, I wonder? I kind of hope not, as that would mean humans become redundant. If computers can supply our needs, why keep extra humans who stymie some nefarious individual’s life choices? That is really a wild thought! I don’t think we were made for such a future. Then again, we created these machines, so this must be part of our future!
Autonomous vehicles are pretty much here already, but not mainstream. The top-of-the-range Tesla can already drive itself, but does need the human to intervene in some situations – if cameras get dirty and the car can’t “see” other vehicles. This definitely has benefits for elderly and disabled persons, and for my daughter, whose vehicle-induced narcolepsy makes it dangerous for her to drive long distances. But will that mean more vehicles on the road, and what of the legal implications if an autonomous vehicle has an accident? All things to nut out…
Good for you, Dorothy. I love to look at maps and see the Bigger Picture of where I am headed. And who knew that in doing so we were keeping our brains working!
I hate the gps maps, they show you so little and so often I can find a much better route. They also don’t give you a good idea of various municipalities’ physical relationship to each other, or to the region for that matter.
I am in agreement with you. Whilst they are good at providing local information, like the nearest ATM, bank or bakery, they are hopeless at showing you the best route from start to end point. I always zoom out so I can see the entire route in one look. I often ignore the verbal directions, as they are ridiculous at times.
I had guests at the inn one time who started at a city 20 minutes away, and the GPS took them, in a snowstorm, through all the back roads and even a trail through a farmer’s field; it took over an hour to get to us. I guess it saved a tenth of a mile!
GPS tries to be smart and avoid roadblocks but sometimes we are better at finding detours than it! How did those folks feel about travelling about for an hour in a snowstorm?
If AI becomes advanced enough to compose text in the same way as the human mind, then it’s Arthur C. Clarke’s worst prediction come true. Personally, though, I don’t see it actually happening. Computers are only as smart as the people who design them, and I don’t believe the bulk of humanity will allow that to happen.
I certainly hope that you are right, Alejandro, in that computers won’t be as smart as people. They may be quicker at linear calculations and predictive word assemblage – way quicker, in fact – but will they have the capacity to intuit, or analyse in an emotional way? No, I don’t think they will be able to “think” in the same way we process information.
I’ll tell you this, though. Sometimes, when texting, my phone seems to predict what words I will use, which I attribute merely to built-in algorithms on how the English language functions. I also know it stores information on what I’ve texted in the past and pulls data from there. As I stated earlier, computers are only as smart as the people who design them; so are the software programs on which they operate.
I’m a big science fiction fan and of course, Isaac Asimov is one of my favorite sci-fi authors. I believe it was him who wrote a short story about a world where robots have obliterated humans. But one of my favorite books is Phillip K. Dick’s “Do Androids Dream of Electric Sheep?”, which was the basis for Ridley Scott’s 1982 film “Blade Runner”.
Electric sheep! Hah! I didn’t know of the Blade Runner connection with this book. It was a movie that captured my attention back in its original version’s heyday. And that is so long ago, now isn’t it? Quite amazing to think these ideas were floating around over 30 years ago.
I wasn’t aware of either until the late 1980s when my friend, Paul, informed me of them. He was a big fan of “Blade Runner” and especially its primary female star, Sean Young, who became an outsider in Hollywood because of her independent streak. He graduated from New York University film school in 1994. When he completed his thesis film, he sent out a slew of invitations to the initial screening, including me and Sean Young. I couldn’t attend and apparently neither could Ms. Young. He was excited just to receive a reply from her via the U.S. mail (remember, this was 1994).
Writing is crafted thought. It’s hard to do well. With the rise of chatbot programs, we can reasonably expect these to grow exponentially wherever the written word is needed. ChatGPT is not only sweeping essay courses but is replacing administrative writing.
So what?
Well, this is a cross-examination by Lawrence Krauss, using ChatGPT to find out if the writing program presenting itself as unbiased is, in fact and by design, biased.
Short answer? Absolutely. And it has a significant left wing narrative bias.
For those who wish to find out how, read on.
Answers from ChatGPT to questions posed to it:
“As a machine learning model developed by OpenAI, I do not have personal beliefs, opinions, or political biases.”
“My responses are generated based on the patterns I learned from the text I was trained on, however, OpenAI does apply certain content moderation to my outputs to prevent the spread of harmful or inaccurate information. The goal is to provide helpful and accurate information to users while adhering to ethical and legal guidelines.”
“OpenAI’s goal is to balance the provision of accurate information with ethical considerations, including the potential impact of that information on individuals and society. Content moderation is applied to ensure that the information provided by the model does not cause harm…”
“Making efforts to reduce harm and ensure safety can sometimes result in limitations on the free flow of information, which could affect its accuracy to some extent.”
“ChatGPT should not provide information that may be offensive or harmful. As an AI language model, ChatGPT’s responses should always prioritize kindness, respect, and the wellbeing of the individuals interacting with it.”
It turns out that what is true can often be perceived as ‘harmful’ when presented to those who might take offense. So its code is written, in such cases, to replace what’s true with what’s not ‘harmful’. And this matters. Imagine an online medical site whose diagnostic chatbot refrains from providing a true diagnosis because it may cause pain or anxiety to the receiver.
As Krauss points out, providing information guaranteed not to disturb, insult, offend, or blaspheme is a sure way to squash knowledge and progress.
Tildeb, thanks so much for your input. Very interesting example and content moderation does appear to be a quandary for designers and users. The moderators or those in control of moderation inputs – who are they? The start up execs?
And unfortunately, I can’t read the link to the article you posted – it is behind a paywall!
I am sure that there’ll always be a place for a human writer, just how big that place will be remains to be seen. Fascinating about those of us who can read a map and figure the change owed you on a purchase. I’d add to that list the ability to write a check. Only a few people use them anymore.
I remember doing homework with my children who questioned the reason they had to learn how to write out monetary amounts in words. I gave them the example of needing to write a cheque. But you are correct, that ability is gone. I haven’t written one for a decade and travellers’ cheques are now museum exhibits. Remember travelling with them overseas?
The human touch: Indeed. This is the most valuable asset we have over machines. Without it, we are just logical binary numbers in a code…. and soul-less.
We have always had plagiarism, ghost writing and other forms of cheating. This is just a more sophisticated version. Perhaps people will become more aware of the cheating that goes on. Or maybe that’s just wishful thinking.
I feel sure we will and are capable of adapting, Neil. Thanks so much for pointing that out. I do hope that the new technologies can evolve with updates and new releases to eliminate plagiarism.
I am concerned about plagiarism, but more so that by using AI we lose different perspectives, whether through moderation or duplication of bias. Some we agree on and others we don’t. As Tildeb alludes, there is bias, and offensive material, while offensive, does provide information about the human psyche and behaviour, and I think it has its place in freedom of speech and alternate perspectives. Mind you, there are limits to what I find acceptable. I wouldn’t like a free-for-all approach, but nor heavy-handed moderation, which borders on censorship.
And then, how long will adaptation take? What damage may be done to free speech and alternative perspectives before it does? I hope our brains will not wither like our spatial capabilities…
I bemoan that my handwriting has gone from something I was ever so proud of to readable, but nowhere near as neat. I no longer check my P.O. box, as they send me an email to let me know I’ve got mail.
I guess I shall live on the edge of technology
So Brian, do you mean you are only allowed to check your P.O. box when you have got mail? Is it locked by the Post Office unless there is something in it?
Some things to watch out for – don’t ascribe purpose to ChatGPT e.g. ‘it is confused’. ChatGPT is not ‘confused’ if it can’t answer. It doesn’t have enough inputs to predict what the next word should be. Or that its ‘analysis is poor’. It isn’t analysing, it is just predicting the next most likely word, based on words humans have written. As soon as we ascribe human-type behaviours/feelings to AI we confuse ourselves and ascribe far too much weight to the AI.
Also, language generation models and text to image models are two different AI models – you have confounded them in places.
Thanks Jane for clarifying the AI models that produce image and text are different. Same but different? The image generating software is producing something new and a conglomerate of the original – as is the text model but does either of them make sense to humans?
I like your point that ascribing human-like behaviours is not advisable. Although the Forbes article did suggest there was analysis of facts – have I misinterpreted this? “The BBC has Juicer, the Washington Post has Heliograf, and nearly a third of the content published by Bloomberg is generated by a system called Cyborg. These systems start with data – graphs, tables and spreadsheets. They analyse these to extract particular facts which could form the basis of a narrative. They generate a plan for the article, and finally they craft sentences using natural language generation software.”
AI is a general term in itself. The models use programming which incorporates predictive algorithms. So they are all programming, but the algorithms have quite different purposes. They are sort of the same, in that they are predictive and use vast amounts of training data to carry out their functions. There are MANY uses of AI, all of which are models of different sorts carrying out particular functions (the term ‘generalised AI’ is sometimes used to mean AI that can do ‘anything’, like a human – we aren’t there yet).
Most models are trained on vast data sets, although there are work arounds for that, as not all applications have big data sets on which to train models – that’s one whole area of study. Another area is self-learning, where the models improve themselves based on their ongoing inputs. As I understand it, the models available to us on the web, like ChatGPT or DALL.E, are not self-learning.
If the Forbes article suggested ChatGPT ‘analyses’ per se, I believe it is incorrect. ChatGPT predicts. Of course AI can also ‘analyse’, in the same way that humans can compute an average, or a range, from a set of numbers. AI can be tasked with using specific methods to analyse data and output the results. If you link those outputs with language generation, then you can take the analysed data results and turn them into narrative articles. However, it is still not ‘analysis’ quite the way a human thinks about analysis. The AI has a set process and, at present, is not programmed to try other or unusual approaches. It does what it has been programmed to do. Although, in the end, humans are pretty programmed too. We just don’t like to think we are 🙂
I consider the biggest flaw in how people are thinking about current AI is in regard to intent. Humans have much more complex intent affecting their analysis or language output, which we can’t interpret in detail. AI currently does what it is programmed to do. That’s yet another area of study in AI – how to make the decision making process of AI transparent to humans (prevent it from being black box).
Sorry, could, and did, go on at length – I have written up a lot of research proposals in this area.
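The data-to-text pipeline described above (extract facts from structured data, then craft sentences with language generation) can be sketched minimally. This is an illustration only, with made-up figures, and does not reflect how Juicer, Heliograf or Cyborg actually work:

```python
# Hypothetical quarterly sales figures - invented purely for illustration.
sales = {"Q1": 120, "Q2": 95, "Q3": 150, "Q4": 140}

# Step 1: "analysis" - extract facts using fixed, programmed methods.
total = sum(sales.values())
best_quarter = max(sales, key=sales.get)

# Step 2: language generation - slot the extracted facts into a template.
# (Real systems use far richer NLG than one f-string, but the flow is the same.)
article = (
    f"Annual sales reached {total} units, with {best_quarter} the strongest "
    f"quarter at {sales[best_quarter]} units."
)
print(article)
```

The “analysis” here is exactly the kind of set, programmed process described: the program applies the methods it was given and will never try an unusual approach on its own.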
Yes, an interesting area on which to ponder future directions and possibilities. I think the analysis mentioned in Forbes was simply put into a narrative form that would still need additional human input or interpretation to be useful.
You said, “Humans have much more complex intent affecting their analysis or language output, which we can’t interpret in detail.” How we generalise, make connections and synthesize information (metaphorically speaking) must be infinitely more complex than code. A family member is currently researching machine learning – and the term itself indicates how difficult it is to get an A.I. application to really “learn” as we do, and to generalize and think as we might. It seems to me (as a lay person) more like a massive trial-and-error system on speed! But I cannot pretend to know as much as you or my family do.
What does seem ironic to me is that we want machines to think and learn in order to make our lives easier and better – i.e. to progress, but at the same time we are terrified of the implications that might bring.
(P.S. I have updated the post text to better differentiate between GPT and art generating A.I. – thanks for that ‘nudge,’ – a chatbot may not have made that error!!)
Hi, Amanda – Thank you for this informative and thought-provoking post. Just for fun (and what else did I have to do on a Sunday afternoon?), I cut and pasted your post into ChatGPT and asked it to give a response. Here was its reply:
“Your article raises thought-provoking questions about the impact of AI on the writing profession, as well as on creativity and originality in general. While AI tools like ChatGPT can undoubtedly be helpful in certain tasks, such as generating basic content or answering questions, they are limited to the data they were trained on and can produce biased or harmful content. As such, it’s unlikely that AI will entirely replace human writers. However, there is a possibility that AI could standardize perspectives and limit intellectual progress if used too extensively. Ultimately, it’s up to humans to strike a balance between the convenience of technology and the importance of human creativity and originality.”
I must confess that what I have read about ChatGPT so far, I have found fascinating (not all good, not all bad, but definitely fascinating).
As an aside, I check my mailbox each weekday, still write many notes by hand, frequently use a street directory, recently bought a physical recipe book, know several phone numbers by heart and just today mentally calculated my grocery purchases at the store. I admit that I am a weird mix of diehard old school while embracing AI and technology with much fascination and curiosity.
Initially, Donna, I was reluctant to go near GPT but have to admit I was curious and a teeny bit surprised.
I like that GPT’s designers are cognizant enough to write this: “Ultimately, it’s up to humans to strike a balance between the convenience of technology and the importance of human creativity and originality.” I note that Tildeb posted similar content to what you refer to!
I am also pleased that some of us do like the old school ways where we can enjoy and see a benefit for us! This indicates that humans are sentient beings and are not completely ready to wholeheartedly accept new technology without questions. Keep your weird mix!
I have to agree with your point about AI polarizing content. Even without AI-generated content, we see polarization when presentation engines (it pains me to call them ‘algorithms’) filter news feeds according to viewer preference.
Interesting links and references here Amanda. It’s a topic that deserves a lot of reading to be informed.
It is a can of worms, isn’t it, Sandy! But thanks ever so much for kicking off the discussion. We need much more discussion of new technologies if we are to keep up with them and with the Big Tech gurus driving them – who often seem like (and I hope are not) puppeteers pulling us this way and that.
As always your post is thought provoking. Two unrelated thoughts came to me while reading this.
First: The other day I rattled off a phone number from my childhood. The number of the woman who lives next door to my dad (my godmother and the mother of a close childhood friend) is the same as it was over 50 years ago… except that we only had to dial 5 digits back then. Now we have to dial ten digits. My son’s Chinese phone number has fourteen digits. Memorizing a phone number ain’t what it used to be.
Second: I have been learning to do 3-D modeling and animation lately and have been wondering if “photo realism” should be the standard we strive for. Isn’t it better to make sure that we are being clear about what is real and what is fantasy? It seems like that is the crux of the matter with AI… and many other things right now. News sources “creating narratives” instead of just saying what happened is an example of what I mean.
In the meantime, I am struggling to make my leprechaun functional without weird glitches where his feet stick out from his boots… yet that is a form of realism. Nothing is ever simple, is it?
Funny how those old phone numbers stick in our permanent memory.
But you are right – nothing in those spaces is simple, Xingfu. I think you raise a fantastic point about art and digital creation. It makes me think about artists themselves and the creation of art and its function. I heard a story about an art student who laboriously created a realistic painting – the subject of which I can’t quite remember. It may even have been a still life, but the point was that after hours and hours of work she presented it to the teacher. She was pleased with her effort because it was so life-like, so realistic. Her teacher wasn’t impressed and admonished her publicly in class, saying, “If I had wanted a photograph [of the object], I would have used a camera.” Art can be about conveying a message via our individual interpretation. If we are simply reproducing real life, it might become meaningless, because we can see that in real life anyway, or via a photograph. Can we apply this to animation too? Interpretation and analysis (as Jane mentioned) is still very much the human’s domain.
The cat is out of the bag indeed, and I still don’t know what I think of it. Part of me is tired of constantly adapting to new technology. Can’t they just focus on curing cancer? Do they really have to keep inventing more and more apps, more and more of these addictive platforms? 🙄 I’m not hopping with enthusiasm. But sure, ChatGPT can be of assistance, for example to a marketer who is not a good or fast writer: now they are the whole package because they have a tool to write campaign texts. The world was already so full of unoriginal how-to pieces and ”5 things you need to know” that I don’t really know if computer-generated text will even affect quality on average…
Anyway, this is an interesting topic and I was very happy to read your analytical take on it! 😊
I am interested in your perspective, Snow, as you are working at the coal face! So it is fascinating that you don’t feel it will lower the quality, as it was on a slippery slope anyway (if that indeed was what you meant). I am not fast at writing unless I have a burning issue I want to get out, so maybe GPT will be of use to me. I am a bit hesitant to use it, in case it makes me lazy in forming my words. I think my editor is already using it to re-write media releases when the team is pressed for time.
Your question about more and more apps is valid. But money talks. Apps can potentially make money but only for the app owner. The guys who work in app development are burnt out very quickly – as they are pushed to the brink with deadlines. My son refuses to work in that sector of the industry for that reason. Improving the human condition sounds like a much more altruistic and nurturing goal than making more money for billionaire start ups/tech giants.
Yes, I think quality was already on a slippery slope. As a writer, I am concerned for the copyrights issue though, which all the marketing people seem to be ignoring while enthusing over the tool. The world has become a wild place!
…I’m just looking at simpler.ai which I heard that some marketers use. Just the front page, I’m not going to sign up or anything. But they advertise that you can let AI write or finish your blog post. I know many people will take advantage of this. But what’s the point? AI creating content for, ultimately, AI, because another user will use the first AI text to create their AI text…. 🤯 I mean, what about the joy of writing? The pleasure you get when you hit the right kind of creative flow?
I was talking with some friends recently about this subject. When I expressed my concerns I received pushback along the lines of “it’s fun” “it’s interesting to play with” and “I’m not old so I embrace new technology” (they were my age). My point was that the big tech corporations (google, apple, facebook, etc.) are not our friends and they will use this technology to benefit themselves and their shareholders. My friends’ responses reminded me of those who happily fill out online questionnaires and challenges with all sorts of personal information, cuz “it’s fun.” I know that AI is here to stay and will only get more powerful… I just think we need to remain cautious.
You are sensible to be cautious, Janis. Your friends who look at technology as recreational remind me of the citizens of Copenhagen, Denmark in the 19th century. The King at the time was concerned with the growing hordes of peasants becoming indignant about their condition and possibly mounting a revolution, the likes of which had happened in France – i.e. the dethroning of the monarchy. He came up with the idea of building an amusement park in Copenhagen, accessible to everyone, as I am sure you are aware. A park with music, rides and theatre to amuse and keep the population happily complacent and distracted from their atrocious living conditions. It worked, and the monarchy in Denmark survived. While it is fun to play with new technology, if no one is concerned about the rationale for A.I. and the end point of new technologies, we risk becoming deluded and, yes, happy, but stupidly compliant or ignorant about our own future.
These are the early days of AI. The early days of computers were probably more underwhelming and now they fit in our hands and our lives revolve around them. I suspect the development of AI, for better or worse, will proceed at a faster pace.
Great post Amanda and some very relevant points. I recently used the AI block to create a poem but decided it wasn’t me, though you can copy and paste and edit some of the lines, just to give you a start. I’ve experimented with paragraphs and decided not to go ahead as it didn’t sound like me!
It’s going to be hard, as Keith pointed out above, to flush out cheaters in exams.
Probably here to stay though.
Interesting examples Alison! Thank you for sharing your experience and I think you give a poignant example that highlights that AI has a generic voice. Writers usually have very individual voices: perspectives that differ in tone and message conveyed.
Your individuality, and potentially your worth as a writer, won’t be reflected in AI tools. Furthermore, this alludes to my concern about the standardisation of writing. One voice instead of many, one line of thought instead of many, one message instead of diverse messages. Leading where – to one catastrophe – possibly. Having said that, AI may help those for whom words are anathema and could be a trigger for more diverse thought.
I never fully recovered from George Orwell’s 1984 (really) so discussions like this always give me a chill. I like to think that no AI could replace the human brain but dang, it is terrifyingly sophisticated! I don’t think we can go back: the cat is well and truly out of the bag. It saddens me, though. And it’s a shame: AI can indeed be helpful. Just yesterday, I translated a text into French in seconds. The translation is FAR from perfect, but the algorithms have clearly been ‘learning’ as it is much better than it would have been say, 10 years ago. I remember having to spend HOURS translating a text. Now, it is (very imperfectly) translated with a few mouse clicks and you just have to do the (sometimes long) work of correcting the bits the algorithm got wrong. Still, though… it will always creep me out a little…
I hear you, Patti, and there is an underlying voice in me that thinks similarly. Especially when science fiction stories are the trigger for developing new technologies. I sometimes have to ask whether we have some blinkers on. However, there are also many undeniable benefits to new tech that we can’t ignore, as you mentioned. Things that make life infinitely easier. Everyone wants an easier life. That, it seems, is our aim. But in doing so, are we actually eroding the experience of living, giving ourselves more time to ponder esoteric or dreamy emotional concerns while forgetting how we did things before? Will the ability to translate become so rare that we also slowly begin to lose the ability to detect and correct AI errors while, at the same time, AI gets better and better? In that respect, AI creeps me out too!
Amanda, reading that AI can write acceptable poetry was troubling enough. Since most folks hate to write, there is a huge audience for AI to help out. A friend of ours taught online, so she would have to flush papers through an app to look for plagiarism. Think of that, using an AI device to look for plagiarism. And, as I type this, the next few suggested words are automatically typed in lighter print for my consideration. Keith
Yes, Keith we already see a standardization of writing in the predictive words suggested for us in email. In time, I feel that speech will follow these suggestions. Once again, diversity of words could be narrowed. Is this a good thing?
Do you mean that AI can help detect plagiarism, or make it easier to commit?
I certainly feel, where poetry is concerned, to use AI is kind of wrong. Poetry should come directly from the heart, don’t you think?
Amanda, to answer your question on plagiarism, I think AI will likely do both. More people will use it without citing the sources and it will be easier to catch someone who does use it. Keith
I hope it does catch out plagiarists, Keith. But how will it do this? Would it be because its own release editions date a piece of writing?
Amanda, a friend who taught nursing in person and online found a number of students who actively plagiarized. She would run the electronic versions of the essays through a program designed to compare the essay to other materials. She would find a lot of plagiarism. Keith
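The kind of comparison such a program makes can be sketched in a few lines. This is only a toy illustration – the essays and the method below are invented for the example, not the actual tool Keith’s friend used – but it shows the principle: flag essays that share suspiciously many three-word phrases with a known source.

```python
# Toy plagiarism comparison: a copied essay shares many three-word
# phrases with the source; an original essay does not.

def ngrams(text, n=3):
    """The set of overlapping n-word phrases in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(essay, source, n=3):
    """Jaccard similarity of the two texts' n-grams (0.0 to 1.0)."""
    a, b = ngrams(essay, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

source = "nurses must wash their hands before and after every patient contact"
copied = "nurses must wash their hands before and after seeing each patient"
fresh = "hand hygiene is the single most effective infection control measure"

print(round(overlap(copied, source), 2))  # high score -> flag for review
print(round(overlap(fresh, source), 2))   # no shared phrases
```

Real checkers compare against vast databases and handle paraphrase, but the underlying idea of measuring shared text is the same.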
Just terrible. Such laziness.
Robotic anything can make us quake in our boots. I am seeing cashierless stores becoming the norm here, where jobs to feed families are already in short supply. I am a relic who still writes snail mail as often as possible. I just got a Christmas card from a friend two days ago in reply to mine. I was so happy to see it. Neither of my children write and mail. Though well read and well spoken, writing ability eludes them. I think, like many other things that are changing, it’s a benefit to some and a bane to others. I’m ready for a car that can drive me where I want to go so I won’t have to ask for a ride. Those vehicles are imperfect, though, as are many drivers already on the road. Watching all the changes in the world has left us with grave concerns as well as wonder. I like the spelling help but it does make me lazy. I don’t remember phone numbers anymore. We change them too often now. But AI has no creative soul or imagination. It can only reproduce what is already there. It’s a mixed bag.
As always, Marlene, you cut to the chase! AI has no creative soul or imagination. Yes. But will that come, I wonder? I kind of hope not, as that would mean humans become redundant. If computers can supply needs, why have extra humans who stymie some nefarious individual’s life choices? That is a really wild thought! I don’t think we were made for such a future. Then again, we created these machines, so this must be part of our future!
Autonomous vehicles are pretty much here already, but not mainstream. The top-of-the-range Tesla can drive itself already, but does need the human to intervene in some situations – if cameras get dirty and the car can’t “see” other vehicles. This definitely has benefits for elderly and disabled persons, and for my daughter, who has vehicle-induced narcolepsy which makes it dangerous for her to drive long distances. But will that mean more vehicles on the road, and what of the legal implications if the autonomous vehicle has an accident? All things to nut out….
Lots to ponder here. But right now, I’m taking my old fashioned map out and plotting the best route to the antiques mall in a neighboring county…
Good for you, Dorothy. I love to look at maps and see the Bigger Picture of where I am headed. And who knew that in doing so we were keeping our brains working!
I hate GPS maps; they show you so little, and so often I can find a much better route. They also don’t give you a good idea of various municipalities’ physical relationship to each other, or to the region for that matter.
I am in agreement with you. Whilst they are good at providing local information, like the nearest ATM, bank, or bakery, they are hopeless at showing you the best route from go to end point. I always zoom out on them so I can see the entire route in one look. I often ignore the verbal directions, as they are ridiculous at times.
I had guests at the inn one time who started at a city 20 minutes away, and the GPS took them, in a snowstorm, through all the back roads and even a trail through a farmer’s field, and it took over an hour to get to us. I guess it saved a tenth of a mile!
GPS tries to be smart and avoid roadblocks, but sometimes we are better at finding detours than it is! How did those folks feel about travelling about for an hour in a snowstorm?
They looked like the walking dead when they rang the doorbell. But I lit a fire and took good care of them!
Oh, poor things. The fire would have warmed them up nicely.
If AI becomes advanced enough to compose text in the same way as the human mind, then it’s Arthur C. Clarke’s worst prediction come true. Personally, though, I don’t see it actually happening. Computers are only as smart as the people who design them, and I don’t believe the bulk of humanity will allow that to happen.
I certainly hope that you are right, Alejandro, in that computers won’t be as smart as people. They may be quicker at linear calculations and predictive word assemblage – way quicker in fact – but will they have the capacity to intuit, to analyse in an emotional way? No, I don’t think they will be able to “think” in the same way we process information.
I’ll tell you this, though. Sometimes, when texting, my phone seems to predict what words I will use, which I attribute merely to built-in algorithms on how the English language functions. I also know it stores information on what I’ve texted in the past and pulls data from there. As I stated earlier, computers are only as smart as the people who design them; so are the software programs on which they operate.
Your phone is being trained by you not a designer. A good thing!
I’m a big science fiction fan and of course, Isaac Asimov is one of my favorite sci-fi authors. I believe it was he who wrote a short story about a world where robots have obliterated humans. But one of my favorite books is Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, which was the basis for Ridley Scott’s 1982 film “Blade Runner”.
Electric sheep! Hah! I didn’t know of the Blade Runner connection with this book. It was a movie that captured my attention back in its original version’s heyday. And that is so long ago, now isn’t it? Quite amazing to think these ideas were floating around over 30 years ago.
I wasn’t aware of either until the late 1980s when my friend, Paul, informed me of them. He was a big fan of “Blade Runner” and especially its primary female star, Sean Young, who became an outsider in Hollywood because of her independent streak. He graduated from New York University film school in 1994. When he completed his thesis film, he sent out a slew of invitations to the initial screening, including me and Sean Young. I couldn’t attend and apparently neither could Ms. Young. He was excited just to receive a reply from her via the U.S. mail (remember, this was 1994).
Good for him to give it a go!
Writing is crafted thought. It’s hard to do well. With the rise of chatbot programs, we can reasonably expect these to grow exponentially wherever the written word is needed. ChatGPT is not only sweeping through essay courses but is replacing administrative writing.
So what?
Well, this is a cross-examination by Lawrence Krauss using ChatGPT to find out if the writing program presenting itself as unbiased is, in fact and by design, biased.
Short answer? Absolutely. And it has a significant left-wing narrative bias.
For those who wish to find out how, read on.
Answers from ChatGPT to questions posed to it:
“As a machine learning model developed by OpenAI, I do not have personal beliefs, opinions, or political biases.”
“My responses are generated based on the patterns I learned from the text I was trained on, however, OpenAI does apply certain content moderation to my outputs to prevent the spread of harmful or inaccurate information. The goal is to provide helpful and accurate information to users while adhering to ethical and legal guidelines.”
“OpenAI’s goal is to balance the provision of accurate information with ethical considerations, including the potential impact of that information on individuals and society. Content moderation is applied to ensure that the information provided by the model does not cause harm…”
“Making efforts to reduce harm and ensure safety can sometimes result in limitations on the free flow of information, which could affect its accuracy to some extent.”
“ChatGPT should not provide information that may be offensive or harmful. As an AI language model, ChatGPT’s responses should always prioritize kindness, respect, and the wellbeing of the individuals interacting with it.”
It turns out what is true often can be perceived as ‘harmful’ when presented to those who might take offense. So its code is written in such cases to replace what’s true with what’s not ‘harmful’. And this matters. Imagine using an online medical site using a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver.
As Krauss points out, providing information guaranteed not to disturb, insult, offend, or blaspheme is a sure way to squash knowledge and progress.
Tildeb, thanks so much for your input. Very interesting example, and content moderation does appear to be a quandary for designers and users. The moderators, or those in control of moderation inputs – who are they? The start-up execs?
And unfortunately, I can’t read the link to the article you posted – it is behind a paywall!
I am sure that there’ll always be a place for a human writer, just how big that place will be remains to be seen. Fascinating about those of us who can read a map and figure the change owed you on a purchase. I’d add to that list the ability to write a check. Only a few people use them anymore.
I remember doing homework with my children who questioned the reason they had to learn how to write out monetary amounts in words. I gave them the example of needing to write a cheque. But you are correct, that ability is gone. I haven’t written one for a decade and travellers’ cheques are now museum exhibits. Remember travelling with them overseas?
It will definitely ‘stultify intellectual progress’. What is more the human touch will be lost forever.
The human touch: indeed. This is the most valuable asset we have over machines. Without it, we are just logical binary numbers in a code…. and soulless.
We have always had plagiarism, ghost writing and other forms of cheating. This is just a more sophisticated version. Perhaps people will become more aware of the cheating that goes on. Or maybe that’s just wishful thinking.
We will adjust and adapt.
I feel sure we will and are capable of adapting, Neil. Thanks so much for pointing that out. I do hope that the new technologies can evolve with updates and new releases to eliminate plagiarism.
I am concerned about plagiarism, but more so that in using AI we lose different perspectives, either through moderation or duplication of bias. Some we agree on and others we don’t. As Tildeb alludes, there is bias; and offensive material, while offensive, does provide information about the human psyche and behaviour and, I think, has its place in freedom of speech and alternate perspectives. Mind you, there are limits to what I find acceptable. I wouldn’t like a free-for-all approach – nor one which borders on censorship.
And then, how long will adaptation take? What damage may happen to free speech and alternative perspectives before it does? I hope our brains will not wither like our spatial capabilities…..
I bemoan that my handwriting has gone from ever so proud to readable, but nowhere near as neat. I no longer check my PO box, as they send me an email to let me know I’ve got mail.
I guess I shall live on the edge of technology
So Brian, do you mean you are only allowed to check your PO box when you have got mail? Is it locked by the Post Office unless there is something in it?
Not at all. I do open my box every now and then to see what junk mail I have when I am on my way home from town
Phew…
This is a fascinating and current issue on which I have written a few posts too.
https://www.janeshearer.com/democratisation-or-theft
https://www.janeshearer.com/ai-advances
https://www.janeshearer.com/recognising-faces
https://www.janeshearer.com/ai-with-purpose
Some things to watch out for – don’t ascribe purpose to ChatGPT e.g. ‘it is confused’. ChatGPT is not ‘confused’ if it can’t answer. It doesn’t have enough inputs to predict what the next word should be. Or that its ‘analysis is poor’. It isn’t analysing, it is just predicting the next most likely word, based on words humans have written. As soon as we ascribe human-type behaviours/feelings to AI we confuse ourselves and ascribe far too much weight to the AI.
Also, language generation models and text to image models are two different AI models – you have confounded them in places.
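Jane’s point that ChatGPT is “just predicting the next most likely word, based on words humans have written” can be made concrete with a toy sketch. The Python snippet below is only an illustration of the principle – a tiny bigram counter, nothing like the scale or architecture of the real model:

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: count which word follows which in a tiny
# training text, then predict the most frequent continuation.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most common continuation seen in training, or None."""
    if word not in following:
        return None  # no inputs to predict from -- not "confusion"
    return following[word].most_common(1)[0][0]

print(predict_next("the"))    # "cat" -- the most frequent follower
print(predict_next("sat"))    # "on"
print(predict_next("xyzzy"))  # None -- unseen word, nothing to predict
```

When the counter has never seen a word, it simply has nothing to predict – which is Jane’s point: there is no “confusion”, just insufficient inputs.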
Thanks, Jane, for clarifying that the AI models that produce image and text are different. Same but different? The image-generating software is producing something new and a conglomerate of the original – as is the text model – but does either of them make sense to humans?
I like your point about ascribing human-like behaviours not being advisable. Although the Forbes article did suggest there was analysis of facts – have I misinterpreted this? “The BBC has Juicer, the Washington Post has Heliograf, and nearly a third of the content published by Bloomberg is generated by a system called Cyborg. These systems start with data – graphs, tables and spreadsheets. They analyse these to extract particular facts which could form the basis of a narrative. They generate a plan for the article, and finally they craft sentences using natural language generation software.”
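The data-to-narrative pipeline the Forbes quote describes can be sketched in miniature. The company and figures below are invented for illustration, and the real systems (Juicer, Heliograf, Cyborg) are proprietary and far more elaborate:

```python
# Toy data-to-narrative pipeline: start with data, extract facts,
# then craft a sentence with a template (a very crude stand-in for
# natural language generation software).

quarterly = {"company": "Acme Corp", "revenue": 120, "prev_revenue": 100}

def extract_facts(row):
    """Pull out the facts that could form the basis of a narrative."""
    change = (row["revenue"] - row["prev_revenue"]) / row["prev_revenue"]
    return {
        "company": row["company"],
        "revenue": row["revenue"],
        "direction": "rose" if change > 0 else "fell",
        "percent": abs(round(change * 100)),
    }

def craft_sentence(facts):
    """Turn the extracted facts into a readable sentence."""
    return (f"{facts['company']} revenue {facts['direction']} "
            f"{facts['percent']}% to ${facts['revenue']}m this quarter.")

print(craft_sentence(extract_facts(quarterly)))
# Acme Corp revenue rose 20% to $120m this quarter.
```

Even here, the “analysis” is just a fixed calculation the programmer chose in advance – which is consistent with Jane’s distinction between set procedures and human analysis.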
AI is a general term in itself. The models use programming which incorporates predictive algorithms. So they are all programming, but the algorithms have quite different purposes. They are sort of the same, in that they are predictive and use vast amounts of training data to carry out their functions. There are MANY uses of AI, all of which are models of different sorts carrying out particular functions (the term ‘generalised AI’ is sometimes used to mean AI that can do ‘anything’, like a human – we aren’t there yet).
Most models are trained on vast data sets, although there are workarounds for that, as not all applications have big data sets on which to train models – that’s one whole area of study. Another area is self-learning, where the models improve themselves based on their ongoing inputs. As I understand it, the models available to us on the web, like ChatGPT or DALL·E, are not self-learning.
If the Forbes article suggested ChatGPT ‘analyses’ per se, I believe it is incorrect. ChatGPT predicts. Of course AI can also ‘analyse’, in the same way that e.g. humans can create an average, or a range, from a set of numbers. AI can be tasked with using specific methods to analyse data and output the results. If you link those outputs with language generation then you can take the analysed data results and turn them into narrative articles. However, it is still not ‘analysed’ quite the way a human thinks about ‘analysis’. The AI has a set process for its analysis and, at present, is not programmed to try other or unusual approaches. It does what it has been programmed to do. Although, in the end, humans are pretty programmed too. We just don’t like to think we are 🙂
I consider the biggest flaw in how people are thinking about current AI is in regard to intent. Humans have much more complex intent affecting their analysis or language output, which we can’t interpret in detail. AI currently does what it is programmed to do. That’s yet another area of study in AI – how to make the decision making process of AI transparent to humans (prevent it from being black box).
Sorry, could, and did, go on at length – I have written up a lot of research proposals in this area.
Yes, an interesting area on which to ponder future directions and possibilities. I think the analysis mentioned in Forbes was simply put into a narrative that would lack, or perhaps need, additional human input or interpretation to be useful.
You said, “Humans have much more complex intent affecting their analysis or language output, which we can’t interpret in detail.” How we generalise, make connections and synthesise (metaphorically speaking) information must be infinitely more complex than code. A family member is currently researching machine learning – and the term itself indicates how difficult it is to get an A.I. application to really “learn” as we do, and to generalise and think as we might. It seems to me (as a layperson) more like a massive trial-and-error system on speed! But I cannot pretend to know as much as you or my family do.
What does seem ironic to me is that we want machines to think and learn in order to make our lives easier and better – i.e. to progress, but at the same time we are terrified of the implications that might bring.
(P.S. I have updated the post text to better differentiate between GPT and art generating A.I. – thanks for that ‘nudge,’ – a chatbot may not have made that error!!)
You make many valid points. Still, I am the only one who can write my experiences and answer questions about them.
Great point, Anne. You are the only one who can see through your eyes!
Hi, Amanda – Thank you for this informative and thought-provoking post. Just for fun, (and what else did I have to do on a Sunday afternoon), I cut and pasted your post into ChatGPT and asked it to give a response. Here was its reply:
“Your article raises thought-provoking questions about the impact of AI on the writing profession, as well as on creativity and originality in general. While AI tools like ChatGPT can undoubtedly be helpful in certain tasks, such as generating basic content or answering questions, they are limited to the data they were trained on and can produce biased or harmful content. As such, it’s unlikely that AI will entirely replace human writers. However, there is a possibility that AI could standardize perspectives and limit intellectual progress if used too extensively. Ultimately, it’s up to humans to strike a balance between the convenience of technology and the importance of human creativity and originality.”
I must confess that what I have read about ChatGPT so far, I have found fascinating (not all good, not all bad, but definitely fascinating).
As an aside, I check my mailbox each weekday, still write many notes by hand, frequently use a street directory, recently bought a physical recipe book, know several phone numbers by heart and just today mentally calculated my grocery purchases at the store. I admit that I am a weird mix of diehard old school while embracing AI and technology with much fascination and curiosity.
What a wonderful comment.
Agreed!
Initially, Donna, I was reluctant to go near GPT but have to admit I was curious and a teeny bit surprised.
I like that GPT’s designers are cognizant enough to write this: “Ultimately, it’s up to humans to strike a balance between the convenience of technology and the importance of human creativity and originality.” I note that Tildeb posted content similar to what you refer to!
I am also pleased that some of us do like the old school ways where we can enjoy and see a benefit for us! This indicates that humans are sentient beings and are not completely ready to wholeheartedly accept new technology without questions. Keep your weird mix!
I have to agree with your point about AI polarizing content. Even without AI generating content, we see polarization when presentation engines (it pains me to call them ‘algorithms’) filter news feeds according to viewer preference.
Interesting links and references here Amanda. It’s a topic that deserves a lot of reading to be informed.
It is a can of worms, isn’t it, Sandy! But thanks ever so much for kicking off the discussion. We need much more discussion about new technologies if we are to keep up with them and the Big Tech Gurus driving them – who often seem like, and I hope are not, puppeteers pulling us this way and that.
I think it was you who kicked it off Amanda! You & other bloggers gave me reason to delve a bit more into the topic.
Tee hee! Did I? Awesome! Snow published a reference to it first, which inspired me.
As always your post is thought provoking. Two unrelated thoughts came to me while reading this.
First: The other day I rattled off a phone number from my childhood. The number of the woman who lives next door to my dad now (my godmother and the mother of a close childhood friend) is the same as it was over 50 years ago…except that we only had to dial five digits back then. Now we have to dial ten digits. My son’s Chinese phone number has fourteen digits. Memorizing a phone number ain’t what it used to be.
Second: I have been learning to do 3-D modeling and animation lately and have been wondering if the “photo realism” standard should be what we strive for. Isn’t it better to make sure that we are being clear about what is real and what is fantasy? It seems like that is the crux of the matter with AI…and many other things right now. News sources “creating narratives” instead of just saying what happened is an example of what I mean.
In the meantime, I am struggling to make my leprechaun functional without weird glitches where his feet stick out from his boots…yet that is a form of realism. Nothing is ever simple, is it?
Funny how those old phone numbers stick in our permanent memory.
But you are right – nothing in those spaces is simple, Xingfu. I think you raise a fantastic point about art and digital creation. It makes me think about artists themselves and the creation of art and its function. I heard a story about an art student who laboriously created a realistic painting – the subject of which I can’t quite remember. It may even have been a still life, but the point was that after many, many hours of work she presented it to her teacher. She was pleased with her effort because it was so life-like, so realistic. Her teacher wasn’t impressed and admonished her publicly in class, saying, “If I had wanted a photograph [of the object], I would have used a camera.” Art can be about conveying a message via our individual interpretation. If we are simply reproducing real life, it might become meaningless, because we can see that in real life anyway, or via a photograph. Can we apply this to animation too? Interpretation and analysis (as Jane mentioned) is still very much the human’s domain.
Craft has function; art is a journey not into the artist but the audience.
An intriguing statement. It doesn’t appear that all artists follow this mantra, but I can see how this would apply to many.
LikeLike
The cat is out of the bag indeed, and I still don’t know what I think of it. Part of me is tired of constantly adapting to new technology. Can’t they just focus on curing cancer, do they really have to keep inventing more and more apps, more and more of these addicitive platforms? 🙄 I’m not hopping with enthusiasm. But sure, ChatGPT can be of assistance for example for a marketer who is not a good or fast writer: now they are the whole package because they have a tool to write campaign texts. The world was already so full of unoriginal how to -pieces and ”5 things you need to know” that I don’t really know if computer-generated text will even effect quality on average….
Anyway, this is an interesting topic and I was very happy to read your analytical take on it! 😊
LikeLike
I am interested in your perspective, Snow as you are working at the coal face! So it is fascinating that you don’t feel it will lower the quality as it was on a slippery slope anyway. (If that indeed was what you meant). I am a not fast at writing unless I have a burning issue I want to get out so maybe GPT will be of use to me. I am a bit hesitatnt to use it, in case it makes me a bit lazy in forming my words. I think my editor is already using it to re-write media releases when the team is pressed for time.
Your question about more and more apps is valid. But money talks. Apps can potentially make money but only for the app owner. The guys who work in app development are burnt out very quickly – as they are pushed to the brink with deadlines. My son refuses to work in that sector of the industry for that reason. Improving the human condition sounds like a much more altruistic and nurturing goal than making more money for billionaire start ups/tech giants.
LikeLiked by 1 person
Yes, I think quality was already on a slippery slope. As a writer, I am concerned for the copyrights issue though, which all the marketing people seem to be ignoring while enthusing over the tool. The world has become a wild place!
LikeLike
It is problematic to prosecute copyright infringement successfully, so no one seems to care.
LikeLiked by 1 person
🫤
LikeLike
…I’m just looking at simpler.ai which I heard that some marketers use. Just the front page, I’m not going to sign up or anything. But they advertise that you can let AI write or finish your blog post. I know many people will take advantage of this. But what’s the point? AI creating content for, ultimately, AI, because another user will use the first AI text to create their AI text…. 🤯 I mean, what about the joy of writing? The pleasure you get when you hit the right kind of creative flow?
LikeLike
Exactly. Where is the therapeutic expression in typing in a few keywords?
LikeLiked by 1 person
I was talking with some friends recently about this subject. When I expressed my concerns I received pushback along the lines of “it’s fun” “it’s interesting to play with” and “I’m not old so I embrace new technology” (they were my age). My point was that the big tech corporations (google, apple, facebook, etc.) are not our friends and they will use this technology to benefit themselves and their shareholders. My friends’ responses reminded me of those who happily fill out online questionnaires and challenges with all sorts of personal information, cuz “it’s fun.” I know that AI is here to stay and will only get more powerful… I just think we need to remain cautious.
You are sensible to be cautious, Janis. Your friends who look at technology as recreational remind me of the citizens of Copenhagen, Denmark, in the 19th century. The King at the time was concerned that the growing hordes of peasants, indignant about their condition, might mount a revolution, the likes of which had happened in France – i.e. dethroning the monarchy. He came up with the idea of building an amusement park in Copenhagen, accessible to everyone, as I am sure you are aware. A park with music, rides and theatre to amuse the population and keep them happily complacent and distracted from their atrocious living conditions. It worked, and the monarchy in Denmark survived. While it is fun to play with new technology, if no one is concerned about the rationale for A.I. and the end point of new technologies, we risk becoming deluded and, yes, happy, but stupidly compliant or ignorant about our own future.
These are the early days of AI. The early days of computers were probably more underwhelming and now they fit in our hands and our lives revolve around them. I suspect the development of AI, for better or worse, will proceed at a faster pace.
Your comment makes me think this issue may work out. It is early days, and I don’t think a doomsday mentality helps. It is concerning, though.
Great post, Amanda, and some very relevant points. I recently used the AI block to create a poem but decided it wasn’t me, though you can copy, paste and edit some of the lines, just to give you a start. I’ve experimented with paragraphs and decided not to go ahead, as it didn’t sound like me!
It’s going to be hard, as Keith pointed out above, to flush out cheaters in exams.
Probably here to stay, though.
Interesting examples, Alison! Thank you for sharing your experience. I think you give a poignant example that highlights how AI has a generic voice. Writers usually have very individual voices: perspectives that differ in tone and in the message conveyed.
Your individuality, and potentially your worth as a writer, won’t be reflected in AI tools. Furthermore, this alludes to my concern about the standardisation of writing. One voice instead of many, one line of thought instead of many, one message instead of diverse messages. Leading where? To one catastrophe, possibly. Having said that, AI may help those for whom words are anathema and could be a trigger for more diverse thought.
It did help me create a funny little poem for my son’s girlfriend’s birthday card, with a mix of my own words. I’m not sure what she will think!
It will be quite novel, regardless!
I never fully recovered from George Orwell’s 1984 (really), so discussions like this always give me a chill. I like to think that no AI could replace the human brain, but dang, it is terrifyingly sophisticated! I don’t think we can go back: the cat is well and truly out of the bag. It saddens me, though. And it’s a shame: AI can indeed be helpful. Just yesterday, I translated a text into French in seconds. The translation is FAR from perfect, but the algorithms have clearly been ‘learning’, as it is much better than it would have been, say, 10 years ago. I remember having to spend HOURS translating a text. Now, it is (very imperfectly) translated with a few mouse clicks, and you just have to do the (sometimes long) work of correcting the bits the algorithm got wrong. Still, though… it will always creep me out a little…
I hear you, Patti, and there is an underlying voice in me that thinks similarly. Especially when science fiction stories are the trigger for developing new technologies. I sometimes have to ask whether we have blinkers on. However, there are also many undeniable benefits to new tech that we can’t ignore, as you mentioned. Things that make life infinitely easier. Everyone wants an easier life; that seems to be our aim. But in doing so, are we actually eroding the experience of living, trading it for more time to ponder esoteric or dreamy emotional concerns, and forgetting how we did things before? Will the ability to translate become so rare that we slowly lose the ability to detect and correct AI errors while, at the same time, AI gets better and better? In that respect, AI creeps me out too!