
Will Writing Bots and Artificial Intelligence Make Writers Redundant?

The Growth of ChatGPT

  • Netflix took 3.5 years to reach 1 million users.
  • Facebook took 10 months to reach 1 million users.
  • Instagram took 2.5 months to reach 1 million users.
  • Spotify took 5 months to reach 1 million users.

Within one week of launch, ChatGPT gained 1 million users, and two months after launch it had 100 million active users. [zdnet.com]

TikTok took 9 months to reach 9 million users.

Just let that sit for comparison.

As writers, are you concerned about the exponential uptake of Artificial Intelligence and its ramifications?

What Exactly is ChatGPT?

ChatGPT is a language processing tool driven by AI technology that answers questions and assists with tasks such as writing emails, essays, and code.

ChatGPT is built on a Generative Pre-trained Transformer (GPT), a large language model (LLM) trained on vast amounts of text from the internet, including websites, books and news articles. It learns the structure of the sequential data it was fed, meaning it tracks the relationships between words across the data set it was trained on.
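
To make that “predicting the next word” idea concrete, here is a minimal, hypothetical sketch in Python. It is only a toy bigram model I have written for illustration – nothing like ChatGPT’s transformer architecture or scale – but it shows the same basic principle of learning which words tend to follow which.

```python
# Toy illustration only: learn which word tends to follow which, then predict.
# ChatGPT uses a huge transformer neural network, not simple word counts,
# but the underlying "predict the next word" idea is similar.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows another ("relationships between words").
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

def generate(start, length=6):
    """Generate text by repeatedly predicting the most likely next word."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # stitches together word pairs learned from the tiny corpus
```

Scale that idea up to billions of words of training text and billions of parameters, and you have the flavour of what sits behind the chat window.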

This application doesn’t just answer questions; ChatGPT and other generative A.I. models can describe art in detail, hold philosophical conversations, create emails or sales campaigns, fix broken computer code, or improve customer support.

Currently free of charge during its creator’s research and feedback-collection phase (premium access starts at $20 per month), the chatbot was created by OpenAI, an A.I. research company – a start-up that presumably controls what is fed into this life-changing tool.

An art generator creates a thorny copyright issue, as apparently “billions of copyright images” were used in a training data set “without compensation or consent from the artists.” But in a fast-moving world, legislative adjustment is a slow-moving beast, and prosecution for copyright infringement is complex, expensive and lengthy.

Jobs – Lost, or an Endangered Species?

Do you foresee customer support writing and marketing jobs potentially disappearing?

Will A.I. be the content creator of choice when writing social media posts and advertising campaigns, or even a book?

Blogger Snow mentioned how ChatGPT and Midjourney helped Ammaar Reshi create a children’s book in a weekend. Reshi is now creating an animated Batman video he put together using a ChatGPT-generated script and images, with the voice-over and video edited using Adobe AI and the phone app Motionleap.

Reshi stated:

“With any kind of new tech that is incredibly powerful, it’s somewhat threatening to people,” he said, adding: “You see people wondering, ‘Will this replace my job?’ … That concern — we shouldn’t pretend like it isn’t a serious one.”

Shudu, the “world’s first digital supermodel,” was created through artificial intelligence and has been used in a Louis Vuitton ad. While bizarre, this may or may not be altogether a bad thing for women with body image concerns – if we can separate the digital from reality.

Photo by Tara Winstead on Pexels.com

The Implications of The Age of Artificial Intelligence

Are we seeing the dawn of an age when freelance writers become a relic – a job that once existed, like a projectionist?

For many users, Google searches have already been superseded by ChatGPT, as the tool tailors its responses in human-like prose.

Human beings may be less reliable than chatbots, but they do produce original material, whereas A.I. tools are limited to the data they were trained on.

Chatbots have limited knowledge of world events after 2021 and may also occasionally produce harmful instructions or biased content, according to an OpenAI FAQ. “The way you ask the question — the prompting — can have an important effect on the quality of the result.” [computerworld.com]

Although tempting to use, might our enthusiasm and potential overuse of A.I. lead to a standardisation of opinion and perspectives? If so, could this unify communities or stultify intellectual progress?

Will humans become relegated to the fringe of intellectual pursuit? Left on the shelf like a World Book Encyclopedia?

Where Am I?

Toby Walsh, one of only 10,000 or so individuals with a PhD in Artificial Intelligence, suggests that technology does make us lazy. Without use, our brain capability begins to shrink: for example, Walsh considers we are one of the last generations able to read a map, because spatial intelligence has decreased with the lack of neural stimulation once provided by navigating without GPS. [And here I was blaming it on ageing!]

With technology thinking for us, might our brains shrink further?

Technology Makes Us Lazy

  • Who bothers composing a handwritten letter anymore? I haven’t even checked my letter box for possibly a year or more – although the M.o.t.h. does, once a month.

  • When was the last time you pulled out a street directory or remembered directions? My 20-something daughter does not even know what that is.

  • Do you still purchase recipe books, or do you use Google to search for recipes online?

  • Who remembers telephone numbers off by heart?

  • Do you still add up several grocery or product purchases, or the change the cashier gives you, in your head?

‘The Cat is out of the Bag’

“These products will eliminate ‘artist’ as a viable career path,” a release from the Joseph Saveri law firm stated. “The thing is: …There’s no going back, so I don’t think litigation is going to stop these platforms from continuously developing and gathering up as much data as they can,” he said. “It’s going to keep happening.” [Nik Thompson]

Chatbots Are Not Perfect

But A.I. has flaws.

Blogger Sandy spoke recently about disruptive technology and referred me to an article suggesting that A.I. assistive writing tools already analyse data and produce articles using natural language generation software.
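
To give a feel for what “analysing data and producing articles with natural language generation” can mean at its very simplest, here is a minimal, hypothetical Python sketch (the team names and scores are invented). It is a toy template system, not the actual software those tools use, but it shows the basic data-in, sentence-out idea.

```python
# Hypothetical toy example of template-driven natural language generation:
# start with structured data, extract a fact, then craft a sentence from it.
match_data = {"home": "Brisbane", "away": "Sydney", "home_score": 3, "away_score": 1}

def extract_fact(data):
    """Pick out the key fact: who won and by what margin (draws ignored for brevity)."""
    margin = data["home_score"] - data["away_score"]
    winner, loser = (data["home"], data["away"]) if margin > 0 else (data["away"], data["home"])
    return winner, loser, abs(margin)

def write_sentence(data):
    """Craft a sentence from the extracted fact using a fixed template."""
    winner, loser, margin = extract_fact(data)
    return f"{winner} beat {loser} {data['home_score']}-{data['away_score']}, a margin of {margin}."

print(write_sentence(match_data))  # Brisbane beat Sydney 3-1, a margin of 2.
```

Real newsroom systems obviously layer far more sophisticated fact extraction, planning and phrasing on top of this idea, but the principle is the same.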

However, the article also pointed out, ‘They cannot write articles with flair, imagination, or in-depth analysis.’ They may not have made writers redundant, but they have increased the number of niche articles written. Niche articles are growing in popularity, and we are already presented with a personalised selection of ‘Your News’ each morning in our browsers. If the reader wants more, the trend is towards a paywall and subscription model – user-pays systems.

The article also suggested that article generating systems won’t replace writers because readers want to read opinion and analysis.

Personally, I would rather be presented with a balanced account of the facts and form my own opinion than hear it from a journalist or expert. But I take their point.

Chatbots may give false information.

ChatGPT is not abreast of local news and idioms in every region; it knows only the data that was fed into its training. It therefore frequently produces incorrect information and graphics.

For example: type in MBRC (our local Council) and you will get something located in the USA, not Australia. Type “Norwegian Rosemaling” into the graphic generator and yes, you will get a new piece of art, but it will look like some kind of distorted, mutant, William Morris wanna-be design – in other words, utter rubbish. The human touch is still relevant and necessary.

Writers Still Have Relevance

A.I. tools are easily confused by relatively simple, existential questions, as some IT nerds have famously posted on social media.

And as for proofreading, the human mind still retains the edge over any automated tool.

So take that, A.I.

Content thieves, not writers, may be the ones who are now redundant. (And take that: compulsive re-bloggers who do not credit original writers).

Like an encyclopedia, we may not quite be ready to be left on the shelf – yet!


99 thoughts on “Will Writing Bots and Artificial Intelligence Make Writers Redundant?”

  1. Amanda, reading that AI can write acceptable poetry was troubling enough. Since most folks hate to write, there is a huge audience for AI to help out. A friend of ours taught online, so she would have to flush papers through an app to look for plagiarism. Think of that, using an AI device to look for plagiarism. And, as I type this, the next few suggested words are automatically typed in lighter print for my consideration. Keith


    1. Yes, Keith we already see a standardization of writing in the predictive words suggested for us in email. In time, I feel that speech will follow these suggestions. Once again, diversity of words could be narrowed. Is this a good thing?
      Do you mean that AI can help with plagiarism or complicate it?
      I certainly feel, where poetry is concerned, to use AI is kind of wrong. Poetry should come directly from the heart, don’t you think?


    2. Amanda, to answer your question on plagiarism, I think AI will likely do both. More people will use it without citing the sources and it will be easier to catch someone who does use it. Keith


      1. Amanda, a friend who taught nursing in person and online found a number of students who actively plagiarized. She would run the electronic versions of the essays through a program designed to compare the essay to other materials. She would find a lot of plagiarism. Keith


      2. Yes, laziness. But also – are we really okay with nurses who haven’t learned the information taught? Umm, NO!

        Seriously, the whole discussion of AI makes me want to live in a cabin in the woods. I’m trying to be informed on the topic, but everything I read makes me angry that our society permits companies to embark on this type of thing that will drastically impact our society, but without any input from us! We should require that it be put on a ballot before allowing them to continue. But we won’t (hence the desire to live in the woods).

        Nina (new to your blog)


      3. A big welcome to my blog., Nina. I can empathise with your idea to remove yourself from the downsides of society and the tech revolution. It can be quite depressing and legislative mechanisms are very slow at addressing such rapid changes. This is private enterprise at work, something that is a pillar of the free market and capitalism. Although noone wants state control over every little aspect of life, it would be good to have a vetting process before it is released on to the market. Could they really anticipate how new inventions will work before release? I remember seeing a doco interviewing the guy that invented the ‘like’ button on social media. He thought he had come up with a simple beautiful way of spreading positivity and his colleagues and him never imagined that liking something or not could send so powerful a message for bad as well as good and increase jealousies and competition. A.I. is a thorny issue and I could foresee it having drastic consequences as we cannot trust everyone to have good intentions.


  2. Robotic anything can make us quake in our boots. I am seeing cashierless stores becoming the norm here, where jobs to feed families are already in short supply. I am a relic that still writes snail mail as often as possible. I just got a Christmas card reply from a friend two days ago in reply to mine. I was so happy to see it. Neither of my children write and mail. Though well read and well spoken, writing ability eludes them. I think, like many other things that are changing, it’s a benefit to some and a bane to others. I’m ready for a car that can drive me where I want to go so I won’t have to ask for a ride. Those vehicles are imperfect, though, as are many drivers already on the road. Watching all the changes in the world has left us with grave concerns as well as wonder. I like the spelling help but it does make me lazy. I don’t remember phone numbers anymore. We change them too often now. But AI has no creative soul or imagination. It can only reproduce what is already there. There is a mixed bag here.


    1. As always Marlene, you cut to the chase! AI has no creative soul or imagination. Yes. But will that come, I wonder? I kind of hope not, as that would mean humans will be redundant. If computers can supply needs, why have extra humans that stymie some nefarious individual’s life choices? That is really a wild thought! I don’t think we were made for such a future. Then again, we created these machines so this must be part of our future!
      Autonomous vehicles are pretty much here already but not mainstream. The top of the range Tesla can drive already, but does need the human to intervene in some situations – if cameras get dirty and the car can’t “see” other vehicles. This definitely has benefits for elderly and disabled persons, and for my daughter, who has vehicle-induced narcolepsy which makes it dangerous for her to drive long distances. But – will that mean more vehicles on the road, and what of the legal implications if the autonomous vehicle has an accident? All things to nut out….


    1. Good for you, Dorothy. I love to look at maps and see the Bigger Picture of where I am headed. And who knew that in doing so we were keeping our brains working!


      1. I hate the gps maps, they show you so little and so often I can find a much better route. They also don’t give you a good idea of various municipalities’ physical relationship to each other, or to the region for that matter.


      2. I am in agreeance with you. Whilst they are good at providing local information like the nearest atm, bank, or bakery they are hopeless at showing you the best route from go to end point. I always zoom out on them so I can see the entire route in one look. I ignore the verbal directions often as they are ridiculous at times.


      3. I had guests at the inn one time who started at a city 20 minutes away, and the GPS took them, in a snow storm, through all the back roads and even a trail through a farmer’s field, and it was over an hour to get to us. I guess it saved a tenth of a mile!


      4. GPS tries to be smart and avoid roadblocks but sometimes we are better at finding detours than it! How did those folks feel about travelling about for an hour in a snowstorm?


  3. If AI becomes advanced enough to compose text in the same way as the human mind, then it’s Arthur C. Clarke’s worst prediction come true. Personally, though, I don’t see it actually happening. Computers are only as smart as the people who design them, and I don’t believe the bulk of humanity will allow that to happen.


    1. I certainly hope that you are right, Alejandro, in that computers won’t be as smart as people. They may be quicker at linear calculations and predictive word assemblage – way quicker in fact – but will they have the capacity to intuit, to analyse in an emotional way? No, I don’t think they will be able to “think” in the same way we process information.


      1. I’ll tell you this, though. Sometimes, when texting, my phone seems to predict what words I will use, which I attribute merely to built-in algorithms on how the English language functions. I also know it stores information on what I’ve texted in the past and pulls data from there. As I stated earlier, computers are only as smart as the people who design them; so are the software programs on which they operate.


      2. I’m a big science fiction fan and of course, Isaac Asimov is one of my favorite sci-fi authors. I believe it was him who wrote a short story about a world where robots have obliterated humans. But one of my favorite books is Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, which was the basis for Ridley Scott’s 1982 film “Blade Runner”.


      3. Electric sheep! Hah! I didn’t know of the Blade Runner connection with this book. It was a movie that captured my attention back in its original version’s heyday. And that is so long ago, now isn’t it? Quite amazing to think these ideas were floating around over 30 years ago.


      4. I wasn’t aware of either until the late 1980s when my friend, Paul, informed me of them. He was a big fan of “Blade Runner” and especially its primary female star, Sean Young, who became an outsider in Hollywood because of her independent streak. He graduated from New York University film school in 1994. When he completed his thesis film, he sent out a slew of invitations to the initial screening, including me and Sean Young. I couldn’t attend and apparently neither could Ms. Young. He was excited just to receive a reply from her via the U.S. mail (remember, this was 1994).


  4. Writing is crafted thought. It’s hard to do well. With the rise of chatbot programs, we can reasonably expect these to grow exponentially wherever the written word is needed. ChatGPT is not only sweeping essay courses but is replacing administrative writing.

    So what?

    Well, this is a cross examination by Lawrence Krauss using ChatGPT to find out if the writing program presenting itself as non biased is, in fact and by design, biased.

    Short answer? Absolutely. And it has a significant left wing narrative bias.

    For those who wish to find out how, read on.

    Answers from ChatGPT to questions posed to it:

    “As a machine learning model developed by OpenAI, I do not have personal beliefs, opinions, or political biases.”

    “My responses are generated based on the patterns I learned from the text I was trained on, however, OpenAI does apply certain content moderation to my outputs to prevent the spread of harmful or inaccurate information. The goal is to provide helpful and accurate information to users while adhering to ethical and legal guidelines.”

    “OpenAI’s goal is to balance the provision of accurate information with ethical considerations, including the potential impact of that information on individuals and society. Content moderation is applied to ensure that the information provided by the model does not cause harm…”

    “Making efforts to reduce harm and ensure safety can sometimes result in limitations on the free flow of information, which could affect its accuracy to some extent.”

    “ChatGPT should not provide information that may be offensive or harmful. As an AI language model, ChatGPT’s responses should always prioritize kindness, respect, and the wellbeing of the individuals interacting with it.”

    It turns out that what is true can often be perceived as ‘harmful’ when presented to those who might take offense. So its code is written in such cases to replace what’s true with what’s not ‘harmful’. And this matters. Imagine an online medical site using a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver.

    As Krauss points out, providing information guaranteed not to disturb, insult, offend, or blaspheme is a sure way to squash knowledge and progress.


    1. Tildeb, thanks so much for your input. Very interesting example and content moderation does appear to be a quandary for designers and users. The moderators or those in control of moderation inputs – who are they? The start up execs?
      And unfortunately, I can’t read the link to the article you posted – it is behind a paywall!


  5. I am sure that there’ll always be a place for a human writer; just how big that place will be remains to be seen. Fascinating about those of us who can read a map and figure the change owed you on a purchase. I’d add to that list the ability to write a check. Only a few people use them anymore.


    1. I remember doing homework with my children who questioned the reason they had to learn how to write out monetary amounts in words. I gave them the example of needing to write a cheque. But you are correct, that ability is gone. I haven’t written one for a decade and travellers’ cheques are now museum exhibits. Remember travelling with them overseas?


  6. We have always had plagiarism, ghost writing and other forms of cheating. This is just a more sophisticated version. Perhaps people will become more aware of the cheating that goes on. Or maybe that’s just wishful thinking.

    We will adjust and adapt.


    1. I feel sure we will and are capable of adapting, Neil. Thanks so much for pointing that out. I do hope that the new technologies can evolve with updates and new releases to eliminate plagiarism.
      I am concerned about plagiarism, but more so that in using AI we lose different perspectives, either through moderation or duplication of bias. Some we agree on and others we don’t. As Tildeb alludes, there is bias; and offensive material, while offensive, does provide information about the human psyche and behaviour and, I think, has its place in freedom of speech and alternate perspectives. Mind you, there are limits to what I find acceptable. I wouldn’t like a free-for-all approach – which borders on censorship.
      And then, how long will adaptation take? What damage may happen to free speech and alternative perspectives before it does? I hope our brains will not wither like our spatial capabilities…..


  7. I bemoan that my handwriting has gone from ever so proud to readable but nowhere near as neat. I no longer check my PO Box as they send me an email to let me know I’ve got mail.
    I guess I shall live on the edge of technology


  8. This is a fascinating and current issue on which I have written a few posts too.
    https://www.janeshearer.com/democratisation-or-theft
    https://www.janeshearer.com/ai-advances
    https://www.janeshearer.com/recognising-faces
    https://www.janeshearer.com/ai-with-purpose

    Some things to watch out for – don’t ascribe purpose to ChatGPT e.g. ‘it is confused’. ChatGPT is not ‘confused’ if it can’t answer. It doesn’t have enough inputs to predict what the next word should be. Or that its ‘analysis is poor’. It isn’t analysing, it is just predicting the next most likely word, based on words humans have written. As soon as we ascribe human-type behaviours/feelings to AI we confuse ourselves and ascribe far too much weight to the AI.

    Also, language generation models and text to image models are two different AI models – you have confounded them in places.


    1. Thanks, Jane, for clarifying that the AI models that produce images and text are different. Same but different? The image-generating software is producing something new, a conglomerate of the original – as is the text model – but does either of them make sense to humans?
      I like your point about ascribing human-like behaviours not being advisable. Although the Forbes article did suggest there was analysis of facts – have I misinterpreted this? “The BBC has Juicer, the Washington Post has Heliograf, and nearly a third of the content published by Bloomberg is generated by a system called Cyborg. These systems start with data – graphs, tables and spreadsheets. They analyse these to extract particular facts which could form the basis of a narrative. They generate a plan for the article, and finally they craft sentences using natural language generation software.”


      1. AI is a general term in itself. The models use programming which incorporates predictive algorithms. So they are all programming, but the algorithms have quite different purposes. They are sort of the same, in that they are predictive and use vast amounts of training data to carry out their functions. There are MANY uses of AI, all of which are models of different sorts carrying out particular functions (the term ‘generalised AI’ is sometimes used to mean AI that can do ‘anything’, like a human – we aren’t there yet).

        Most models are trained on vast data sets, although there are work arounds for that, as not all applications have big data sets on which to train models – that’s one whole area of study. Another area is self-learning, where the models improve themselves based on their ongoing inputs. As I understand it, the models available to us on the web, like ChatGPT or DALL.E, are not self-learning.

        If the Forbes article suggested ChatGPT ‘analyses’ per se, I believe it is incorrect. ChatGPT predicts. Of course AI can also ‘analyse’, in the same way that e.g. humans can create an average, or a range, from a set of numbers. AI can be tasked with using specific methods to analyse data and output the results. If you link those outputs with language generation then you can take the analysed data results and turn them into narrative articles. However, it is still not ‘analysed’ quite the way a human thinks about ‘analysis’. The AI has a set process for its analysis and, at present, is not programmed to try other or unusual approaches. It does what it has been programmed to do. Although, in the end, humans are pretty programmed too. We just don’t like to think we are 🙂

        I consider the biggest flaw in how people are thinking about current AI is in regard to intent. Humans have much more complex intent affecting their analysis or language output, which we can’t interpret in detail. AI currently does what it is programmed to do. That’s yet another area of study in AI – how to make the decision making process of AI transparent to humans (prevent it from being black box).

        Sorry, could, and did, go on at length – I have written up a lot of research proposals in this area.


      2. Yes, an interesting area on which to ponder future directions and possibilities and I think the analysis mentioned in Forbes was simply put into a narrative that would lack or perhaps need additional human input or interpretation to be useful.
        You said, “Humans have much more complex intent affecting their analysis or language output, which we can’t interpret in detail.” How we generalise, make connections and synthesize, (metaphorically speaking) information must be infinitely more complex than code. A family member is currently researching machine learning – and the term itself indicates how difficult it is to get an A.I. application to really “learn,” as we do and to generalize and think as we might. It seems to me (as a lay person) more like a massive trial and error system on speed! But I cannot pretend to know as much as yourself or my family does.
        What does seem ironic to me is that we want machines to think and learn in order to make our lives easier and better – i.e. to progress, but at the same time we are terrified of the implications that might bring.
        (P.S. I have updated the post text to better differentiate between GPT and art generating A.I. – thanks for that ‘nudge,’ – a chatbot may not have made that error!!)


  9. Hi, Amanda – Thank you for this informative and thought-provoking post. Just for fun, (and what else did I have to do on a Sunday afternoon), I cut and pasted your post into ChatGPT and asked it to give a response. Here was its reply:
    “Your article raises thought-provoking questions about the impact of AI on the writing profession, as well as on creativity and originality in general. While AI tools like ChatGPT can undoubtedly be helpful in certain tasks, such as generating basic content or answering questions, they are limited to the data they were trained on and can produce biased or harmful content. As such, it’s unlikely that AI will entirely replace human writers. However, there is a possibility that AI could standardize perspectives and limit intellectual progress if used too extensively. Ultimately, it’s up to humans to strike a balance between the convenience of technology and the importance of human creativity and originality.”
    I must confess that what I have read about ChatGPT so far, I have found fascinating (not all good, not all bad, but definitely fascinating).
    As an aside, I check my mailbox each weekday, still write many notes by hand, frequently use a street directory, recently bought a physical recipe book, know several phone numbers by heart and just today mentally calculated my grocery purchases at the store. I admit that I am a weird mix of diehard old school while embracing AI and technology with much fascination and curiosity.


    1. Initially, Donna, I was reluctant to go near GPT but have to admit I was curious and a teeny bit surprised.
      I like that GPT’s designers are cognizant enough to write this: “Ultimately, it’s up to humans to strike a balance between the convenience of technology and the importance of human creativity and originality.” I note that Tildeb posted the similar content that you refer to!
      I am also pleased that some of us do like the old school ways where we can enjoy and see a benefit for us! This indicates that humans are sentient beings and are not completely ready to wholeheartedly accept new technology without questions. Keep your weird mix!


  10. I played around a bit with an AI program a few months ago and decided I have no use for it. Sure, it was factually accurate, but there was no passion, no personalization. If I need a robot to write my blog for me, then I’ll put the blog to bed permanently. I fear it will just further decrease our ability to think for ourselves.


    1. Computers already decrease our ability to think for ourselves, don’t they? People forget how to navigate following their nose, so reliant are they now on GPS software. And it gets it wrong still! I quite like reading a map and determining the best route. I miss the referdexes – I am not sure what you call your street directories – the old hard copy versions, that is. (Comment rescued from spam)


      1. Indeed they do! Instead of trying to remember how to spell a word, we type whatever we think and let the computer spell it for us. I find that instead of doing the math in my head as I used to be quite capable of doing (a career as an accountant), I now keep a calculator app on my home screen and use it when I need to know what 2+2 is equal to! Well, okay, that may be an exaggeration, but you get the gist. Oh yes, I remember one trip to visit a friend in Pennsylvania … I don’t have GPS in my car, but I had rented a car for this trip and it did have GPS … the GPS led me right to the end of a dirt road with nowhere to go but back! As re the street directories, we just call them ‘maps’ … and most people today don’t know how to even read one! (Thanks for rescuing my comment!)


      2. Your experiences with Maps/street directories sound all too familiar. Yes I do think we are getting lazier. This is not so good for us. We won’t be around to see the implications of this in future generations, but some of us can see that it doesn’t bode well. If our brains become smaller, or we become less able to think independently, will technology take over more and more tasks, and if so, what then do we do?


      3. No, it’s not a good thing for us. We are handing our brains over to technology … and someday, that technology will fail and … will we still be able to think for ourselves? If you want a prescient yet frightening look at the possibilities, I recommend William Forstchen’s book, “One Second After”. All too possible …


      4. I will look up that book, One Second After. Thank you for the recommendation, as frightening as the prospect may be. My son, who works in this area, does not like the direction A.I. is going at all.
        (Again, I have unspammed this comment)


      5. It is definitely an eye-opening book and provides much food for thought, for the scenario is one that could very well happen someday. Thanks for rescuing me from spam!!!


  11. I have to agree with your point about AI polarizing content. Even without AI generating content, we see polarization when presentation engines (it pains me to call them ‘algorithms’) filter news feeds according to viewer preference.

    Interesting links and references here Amanda. It’s a topic that deserves a lot of reading to be informed.


    1. It is a can of worms, isn’t it, Sandy! But thanks ever so much for kicking off the discussion. We need much more discussion about new technologies if we are to keep up with them and the Big Tech gurus driving them – who often seem like, and I hope are not, puppeteers pulling us this way and that.


  12. As always your post is thought provoking. Two unrelated thoughts came to me while reading this.
    First: The other day I rattled off a phone number from my childhood. The phone number of the woman who lives next door to my dad now (my godmother and the mother of a close childhood friend) is the same as it was over 50 years ago…except that we only had to dial 5 digits back then. Now we have to dial ten digits. My son’s Chinese phone number has fourteen digits. Memorizing a phone number ain’t what it used to be.
    Second: I have been learning to do 3-D modeling and animation lately and have been wondering if the “photo realism” standard should be what we strive for. Isn’t it better to make sure that we are being clear about what is real and what is fantasy? It seems like that is the crux of the matter with AI…and many other things right now. News sources “creating narratives” instead of just saying what happened is an example of what I mean.
    In the meantime, I am struggling to make my leprechaun functional without weird glitches where his feet stick out from his boots…yet that is a form of realism. Nothing is ever simple, is it?


    1. Funny how those old phone numbers stick in our permanent memory.
      But you are right – nothing in those spaces is simple, Xingfu. I think you raise a fantastic point about art and digital creation. It makes me think about artists themselves and the creation of art and its function. I heard a story about an art student who laboriously created a realistic painting – the subject of which I can’t quite remember. It may have even been a still life, but the point was that after many hours and hours of work she presented it to the teacher. She was pleased with her effort because it was so life-like, so realistic. Her teacher wasn’t impressed and admonished her publicly in class, saying, “If I had wanted a photograph [of the object], I would have used a camera.” Art can be about conveying a message via our individual interpretation. If we are simply reproducing real life, it might become meaningless, because we can see that in real life anyway or via a photograph. Can we apply this to animation too? Interpretation and analysis (as Jane mentioned) is still quite the human’s domain.


  13. The cat is out of the bag indeed, and I still don’t know what I think of it. Part of me is tired of constantly adapting to new technology. Can’t they just focus on curing cancer; do they really have to keep inventing more and more apps, more and more of these addictive platforms? 🙄 I’m not hopping with enthusiasm. But sure, ChatGPT can be of assistance, for example for a marketer who is not a good or fast writer: now they are the whole package because they have a tool to write campaign texts. The world was already so full of unoriginal how-to pieces and “5 things you need to know” that I don’t really know if computer-generated text will even affect quality on average….
    Anyway, this is an interesting topic and I was very happy to read your analytical take on it! 😊


    1. I am interested in your perspective, Snow, as you are working at the coal face! So it is fascinating that you don’t feel it will lower the quality, as it was on a slippery slope anyway. (If that indeed was what you meant.) I am not fast at writing unless I have a burning issue I want to get out, so maybe GPT will be of use to me. I am a bit hesitant to use it, in case it makes me a bit lazy in forming my words. I think my editor is already using it to re-write media releases when the team is pressed for time.
      Your question about more and more apps is valid. But money talks. Apps can potentially make money but only for the app owner. The guys who work in app development are burnt out very quickly – as they are pushed to the brink with deadlines. My son refuses to work in that sector of the industry for that reason. Improving the human condition sounds like a much more altruistic and nurturing goal than making more money for billionaire start ups/tech giants.


      1. Yes, I think quality was already on a slippery slope. As a writer, I am concerned for the copyrights issue though, which all the marketing people seem to be ignoring while enthusing over the tool. The world has become a wild place!


      2. …I’m just looking at simpler.ai which I heard that some marketers use. Just the front page, I’m not going to sign up or anything. But they advertise that you can let AI write or finish your blog post. I know many people will take advantage of this. But what’s the point? AI creating content for, ultimately, AI, because another user will use the first AI text to create their AI text…. 🤯 I mean, what about the joy of writing? The pleasure you get when you hit the right kind of creative flow?


  14. I was talking with some friends recently about this subject. When I expressed my concerns I received pushback along the lines of “it’s fun” “it’s interesting to play with” and “I’m not old so I embrace new technology” (they were my age). My point was that the big tech corporations (google, apple, facebook, etc.) are not our friends and they will use this technology to benefit themselves and their shareholders. My friends’ responses reminded me of those who happily fill out online questionnaires and challenges with all sorts of personal information, cuz “it’s fun.” I know that AI is here to stay and will only get more powerful… I just think we need to remain cautious.


    1. You are sensible to be cautious, Janis. Your friends who look at technology as recreational remind me of the citizens of Copenhagen, Denmark in the 19th century. The King at the time was concerned with the growing hordes of peasants becoming indignant about their condition and possibly mounting a revolution, the likes of which had happened in France – i.e. dethroning the monarchy. He came up with the idea of building an amusement park in Copenhagen, accessible to everyone, as I am sure you are aware: a park with music, rides and theatre to amuse and keep the population happily complacent and distracted from their atrocious living conditions. It worked, and the monarchy in Denmark survived. While it is fun to play with new technology, if no one is concerned about the rationale for A.I. and the end point of new technologies, we risk becoming deluded and, yes, happy, but stupidly compliant or ignorant about our own future.


  15. These are the early days of AI. The early days of computers were probably more underwhelming and now they fit in our hands and our lives revolve around them. I suspect the development of AI, for better or worse, will proceed at a faster pace.


  16. Great post, Amanda, and some very relevant points. I recently used the AI block to create a poem but decided it wasn’t me, though you can copy and paste and edit some of the lines, just to give you a start. I’ve experimented with paragraphs and decided not to go ahead as it didn’t sound like me!
    It’s going to be hard, as Keith pointed out above, to flush out cheaters in exams.
    Probably here to stay, though.


    1. Interesting examples, Alison! Thank you for sharing your experience; I think you give a poignant example that highlights that AI has a generic voice. Writers usually have very individual voices: perspectives that differ in tone and message conveyed.
      Your individuality, and potentially your worth as a writer, won’t be reflected in AI tools. Furthermore, this alludes to my concern about standardisation of writing: one voice instead of many, one line of thought instead of many, one message instead of diverse messages. Leading where – to one catastrophe, possibly? Having said that, AI may help those for whom words are anathema and could be a trigger for more diverse thought.


      1. It did help me create a funny little poem for my son’s girlfriend’s birthday card, with a mix of my own words. I’m not sure what she will think!


  17. I never fully recovered from George Orwell’s 1984 (really) so discussions like this always give me a chill. I like to think that no AI could replace the human brain but dang, it is terrifyingly sophisticated! I don’t think we can go back: the cat is well and truly out of the bag. It saddens me, though. And it’s a shame: AI can indeed be helpful. Just yesterday, I translated a text into French in seconds. The translation is FAR from perfect, but the algorithms have clearly been ‘learning’ as it is much better than it would have been say, 10 years ago. I remember having to spend HOURS translating a text. Now, it is (very imperfectly) translated with a few mouse clicks and you just have to do the (sometimes long) work of correcting the bits the algorithm got wrong. Still, though… it will always creep me out a little…


    1. I hear you, Patti, and there is an underlying voice in me that thinks similarly. Especially when science fiction stories are the trigger for developing new technologies, I sometimes have to ask whether we have some blinkers on. However, there are also many undeniable benefits to new tech that we can’t ignore, as you mentioned. Things that make life infinitely easier. Everyone wants an easier life; that is our aim, it seems. But in doing so, are we actually eroding the experience of what living life is, giving ourselves more time to ponder esoteric or dreamy concerns while forgetting how we did things before? Will the ability to translate become so rare that we also slowly begin to lose the ability to detect and correct AI errors while, at the same time, AI gets better and better? In that respect, AI creeps me out too!


  18. such a great post – and I am reading this after your other (more recent) post, and so have a lot to think about – but a key point to ponder from this post was this:
    “if we can separate the digital from reality” – because that could be a good thing…. but so much to think about


    1. If we can separate the digital from the reality… As technology improves, that line blurs – the stuff of science fiction. If technology can make us lazy, will a potential outcome be that our brains can no longer tell the difference between digital content and reality? That is a brave new world….


      1. The more I find out about ChatGPT4, the more worried I become. It can alter its own code, BTW, and there is zero regulation about its use. For example, you can receive stock answers to all kinds of nefarious questions and think it might be safe. But order the program to ignore these guardrails and then ask again to receive much different and detailed answers. It’s very worrying on this practical level alone.

        But as an educator, there’s more to be concerned about…

        Writing, as I have previously mentioned, is crafted thought. By working with our thoughts this way, we clarify and refine what we think. We draw on vocabulary and first, second, and third person experiences. We make additional meaning by shaping it in certain ways, especially considering the intended target audience, and using a host of affiliated knowledge to deploy various syntax and pragmatics. Humour, irony, satire, and so on. These use various parts of the brain that are different from where simply spoken stream-of-consciousness words come from. When we turn to something like ChatGPT4 to do this written ‘work’ for us, I think we’re losing something very human for this gain in time management and efficiency in writing. Plus, it’s so much easier.

        Reading alone does not build the brain the same way as writing does. But the two are deeply connected, so how will our reading skills decline when our writing skills are transferred to AI? How much comprehension will be lost when we lack the intellectual muscle that was once built up over time by reading, writing, and making the kind of creative connections used for critical thinking?

        I suspect we’re well on our way to Idiocracy.


  19. From ChatGPT about ‘learning’ from its training materials:

    “It’s up to the users who interact with me to critically evaluate the information they receive and to seek out reliable sources of information.”

    “My training data does not dictate the accuracy or truthfulness of the responses I generate.”

    “It is up to humans to evaluate the content for accuracy, truthfulness, and reliability.”

    “It’s possible that I may generate responses based on unreliable or inaccurate information that has been widely circulated on the internet.”

    “It’s important to note that I am not a replacement for critical thinking or careful evaluation of information.”

    Once again, we are left relying on how well our education systems teach critical thinking. And they are failing… spectacularly. (Functional illiteracy has risen in Canada from the low 30% to about 40% in the past ten years. Over 50% of Americans read below grade 6 level. Over 40% of Australians read at level 1 or 2. And so on. Critical thinking and careful evaluation of text obviously relies on functional literacy. Now if only I could find a teacher who can define what critical thinking even means…)


    1. The stats given re critical thinking are worrying but they are not news to me. I notice plenty of ignorance in the older generations who accept everything they hear on syndicated network news that is largely opinion-based rubbish and sport. So in that regard, younger generations are more aware of questioning sources and distinguishing fact from opinion. However, the plethora of information on the net might mean the subtle – subliminal, if you will – influence of repetition amplifies latent bias and cognitive dissonance. I think users of ChatGPT should be made to write an essay on the above information before use! Haha! As our brains use critical analysis less, that ability to question will perhaps shrink further. That’s concerning. Educators have a huge task in preventing that.

