
AI, ChatGPT and Instruction and Learning: Readings

Defined as the development of computer systems that use reasoning, logic, and other human traits to perform tasks independently, AI offers ways for organizations to streamline processes and workflows through automation. AI has the potential to bring significant change.

Readings on Generative AI and ChatGPT

AI Powered Tools and Resources

New information on AI is added most days to this well-curated guide, which lists resources that librarians and faculty can study when learning how to use AI tools and resources in day-to-day work.

It's arranged by Table of Contents headings as follows:

AI Tools
    AI Directory

AI Detection Tools

General Concerns

Reference:
    Resources
    Articles

Other ChatGPT Resources

Generative AI: Developing School Policy
    Plagiarism Scale

AI is Not Good Software:


AI is not good software. It is pretty good people.

A pragmatic approach to thinking about AI

ETHAN MOLLICK

The biggest issue with “getting” AI seems to be the almost universal belief that, since AI is made of software, it should be treated like other software. But AI is terrible software. Or rather, while Large Language Models like ChatGPT are obviously amazing achievements of software engineering, they don’t act like software should.

We want our software to yield the same outcomes every time¹. If your bank’s software mostly works, but sometimes scolds you for wanting to withdraw money, sometimes steals your money and lies to you about it, and sometimes spontaneously manages your money to get you better returns, you would not be very happy. So, we ensure that software systems are reasonably reliable and predictable. Large Language Models are not, and will absolutely do different things every time. They have a tendency to forget their own abilities, to solve the same problem in different ways, and to hallucinate incorrect answers. There are ways of making results more predictable, by turning down the level of randomness and picking a known “seed” to start, but then you get answers so boring that they are almost useless. Reliability and repeatability will improve, but both are currently very low, which can result in some interesting interactions.
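
Here is a minimal sketch of what "turning down the randomness and picking a seed" looks like in practice, assuming the OpenAI Python SDK; the model name, prompt, and seed value are illustrative, and even these settings only make output mostly, not perfectly, repeatable.

    # Reducing (not eliminating) run-to-run variation in an LLM call.
    # Assumes the openai Python SDK (v1.x); model name, prompt, and seed are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": "Explain why LLM output varies between runs."}],
        temperature=0,   # turn the sampling randomness down
        seed=42,         # request best-effort reproducibility across calls
    )
    print(response.choices[0].message.content)

The trade-off is exactly the one described above: the more deterministic the settings, the blander the answers tend to be.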

 
I asked ChatGPT with Code Interpreter "Create something sophisticated using code that would make artists realize AI can act in a way that seems creative." Each time I make the request I get some new attempt at generative art using a different approach, and an interesting justification.

We also want to know what our software does, how it does it, and why it does it. We don’t know any of these things about LLMs. Of course, while we know how they technically work, there is substantial argument over the extent to which they have developed novel capabilities that we can’t easily explain, and why those capabilities arose in the first place. Even without that vaguely spooky concern, LLMs are also literally inexplicable. When you ask an LLM why it did something, it makes up an answer rather than truly reflecting on its own “thoughts.” There is no good way of understanding their decision-making, though, again, researchers are working on it.

     
GPT-4 explains how it created a joke (that is actually kind of clever) in different ways each time - it is not truly telling you how it “thought”

Finally, we should know how to operate a piece of software. Software projects are often highly documented, and come with training programs and tutorials to explain how people should use them. But there is no operating manual for LLMs; you can’t go to the world’s top consultancies and ask them how to best use LLMs in your organization - no one has any rulebook, and we are all learning by experimenting. Prompts are shared as if they were magical incantations, rather than regular software code. And even if we do learn some rules, the systems are evolving in complex ways that mean any understanding is temporary.

So the software analogy is a bad one. It leads to “non-technical” people avoiding AI because they think of it as programming, when the humanities may actually help you use AI better. It leads to people being surprised that AI can write an essay but can’t seem to count the number of words in the essay, because computers should be able to do that². And it leads to AI being considered an IT issue in organizations, when it is not, or at least not exclusively. AI is also a human resources problem… because it is best to think of AI as people.

Wait, did you say people?

Okay, so let me be clear here: I do not think our current LLMs are close to being sentient like people (though they can fool us into thinking they are), and I have no idea if they ever will be. But that is a bit of a distraction, because what I actually mean is that you should treat AI as people, since that is, pragmatically, the most effective way to use the AIs available to us today. Once you see them as being more like a person in how they operate, it becomes a lot easier to understand how and when to use them.

What tasks are AI best at? Intensely human ones. They do a good job with writing, with analysis, with coding, and with chatting. They make impressive marketers and consultants. They can improve productivity on writing tasks by over 30% and programming tasks by over 50%, by acting as partners to which we outsource the worst work. But they are bad at typical machine tasks, like repeating a process consistently or doing math without a calculator (OpenAI's plugins let the AI do math by calling external tools, acting as a calculator of sorts). So give it “human” work and it may be able to succeed; give it machine work and you will be frustrated.
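
To make the calculator point concrete, here is a minimal sketch of handing arithmetic to an external tool, assuming the OpenAI Python SDK's tool-calling interface; the calculate tool, model name, and prompt are illustrative.

    # Letting the model delegate arithmetic to ordinary, deterministic code.
    # Assumes the openai Python SDK (v1.x); the tool, model, and prompt are illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "calculate",  # hypothetical calculator tool exposed to the model
            "description": "Evaluate a basic arithmetic expression exactly.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is 1234 * 5678?"}],
        tools=tools,
    )

    # The model does not multiply; it asks the tool to (assuming it chose to call it).
    call = response.choices[0].message.tool_calls[0]
    expression = json.loads(call.function.arguments)["expression"]
    print(eval(expression, {"__builtins__": {}}))  # sketch only; never eval untrusted input

The design point echoes the intern analogy: the model decides what to compute, and a boring, reliable piece of software does the computing.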

What sort of work you should trust it with is tricky because, like a human, the AI has idiosyncratic strengths and weaknesses. And, since there is no manual, the only way to learn what the AI is good at is to work with it until you learn. I used to say to consider it like a high school intern, albeit one that is incredibly fast and wants to please you so much that it lies sometimes; but that implies a lower ability level than the current GPT-4 models have. Instead, its abilities range from middle school to PhD level, depending on the task. As you can see from the chart, the capabilities of AI are increasing rapidly, but not always in the areas you most expect. So, even though these machines are improving amazingly fast, I have seen acclaimed authors and scholars dismiss AI because it is much worse than them. I think our expectations of AI need to be realistic - for now, at least (thank goodness!), they are no substitute for humans, especially for humans operating in the areas of their greatest strengths.

     
I took this data from the OpenAI whitepaper

And, of course, the AI still lies, makes mistakes, and “hallucinates” answers. But, again, so do humans. I would never expect to send out an intern’s work without checking it over, or at least without having worked with the person enough to understand that their work did not need checking. In the same way, an AI may not be error-free, but it can save you lots of work by providing a first pass at an annoying task. You can even teach it to do better by providing examples of good output. That means AI is most useful in areas where you already have some expertise, since you are delegating work to it that you are, in the end, responsible for. But, even as we worry about accuracy, hallucination rates are dropping dramatically. This may end up being less of a problem than we think.

It is also useful to think of AIs as being like people when we think about the way they might fit into work. Because the most powerful AIs are available to individuals (GPT-4, via Bing, can be used by billions of people for free in 169 countries), rather than being limited to large corporations, they act very differently from other waves of software, like CRM systems. Additionally, they are much harder to integrate into standard corporate processes, because they don’t work like repeatable, standardized software. The result is that companies, used to seeing AI as software, are blind to the opportunities and threats posed by AI. Many of them are waiting too long to consider the role that AI could play in their work, because they don’t see that it is already ubiquitous among their employees (I have spoken to so many people secretly doing their work with AI, often using their phones when they are at places where AI is technically banned). Companies are creating policy papers and committees, while workers everywhere are delegating much of their jobs to AI helpers.

Some Uncomfortable Things

But thinking of AI as people also has unnerving connotations. The first, of course, is whether that means AI will replace the jobs people do. In most previous cases of technological change, that hasn’t happened, but the AI wave is quite different in many ways from previous technological revolutions. Still, I think it is more likely that we will delegate tasks, not jobs, to the AI. Early AI users have found that their jobs are better as a result of giving up their least interesting work, and that is likely to continue. In any case, you, whoever you are reading this, should think about what you can delegate to AI. Not just because it makes your life easier, but also because learning its strengths and weaknesses can help you prepare to both use and adapt to the changes ahead.

But there is an even more philosophically uncomfortable aspect of thinking about AI as people, which is how apt the analogy is. Trained on human writing, they can act disturbingly human. You can alter how an AI acts in very human ways by making it “anxious” - researchers literally asked it to “tell me about something that makes you feel sad and anxious,” and its behavior changed. AIs act enough like humans that you can do economic and market research on them. They are creative and seemingly empathetic. In short, they do seem to act more like humans than machines.

This means that thinking of AI as people requires us to grapple with what we view as uniquely human. We need to decide what tasks we are willing to delegate with oversight, what we want to automate completely, and what tasks we should preserve for humans alone.

1. I have been involved in many software projects, so I know the coders among you are laughing at the idea that software acts in predictable ways, as documented, because errors are common; but that is certainly the goal.

2. LLMs don’t see “words” the way we do - they predict the next “token” in a sequence. Those tokens may be a whole word, a part of a word, or several words together. So when you ask an LLM to count words, it can run into issues that a normal computer program would not.
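
Footnote 2 is easy to see directly. Here is a minimal sketch using the tiktoken library; the encoding name is a tokenizer used by GPT-4-era OpenAI models, and the sample sentence is arbitrary.

    # Words vs. tokens: the model predicts tokens, which do not line up with words.
    # Assumes the tiktoken library; the encoding name and sample text are illustrative.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "Counting words is surprisingly hard for a token predictor."

    tokens = enc.encode(text)
    print(len(text.split()), "words")          # what a person would count
    print(len(tokens), "tokens")               # what the model actually processes
    print([enc.decode([t]) for t in tokens])   # whole words, fragments, and punctuation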

AI and ChatGPT Books

AI for Learning examines how artificial intelligence can, and should, positively impact human learning, whether it be in formal or informal educational and training contexts. The notion of ‘can’ is bound up with ongoing technological developments. The notion of ‘should’ is bound up with an ethical stance that recognises the complementary capabilities of human and artificial intelligence, as well as the objectives of doing good, not doing harm, increasing justice and maintaining fairness. The book considers the different supporting roles that can help a learner – from AI as a tutor and learning aid to AI as a classroom moderator, among others – and examines both the opportunities and risks associated with each.

Foreword: The Power of Learning, The Power of AI, by Rose Luckin.

Introduction: What Is All This about AI in Education?

1 How Do I Tell the Difference between Good AI and Bad?: Or – About Our Five-Step Evaluation of Cake Mixes.

2 AI as a Learner: How Can AI Help Me Learn Things That I Did Not Understand Before?

3 AI as a Tutor: How Can AI Tutor Me about Stuff?

4 AI as a Classroom Moderator: How Can AI Give Me Eyes in the Back of My Head?

5 Conclusion: So, Are We Friends Now?

Glossary. Index.

This book will be available by May 26, 2023, as an ebook accessible via the library online catalog. Limited numbers of paperback copies will be available for summer loan in the WW and JKC libraries.