
ChatGPT AI – simple chatbot or dangerous artificial intelligence?

Terms like artificial intelligence, robots, or even ChatGPT AI are currently everywhere in the media. Many questions are being raised, but fears are also being voiced about this "big step".

But is this step really as big as it seems? Couldn't it be that a mountain is being made out of a molehill, precisely because so few people are familiar with the topic?

As we have seen several times before, the media like to jump on a hype and chase the widest possible reach or readership, which quickly distorts the actual facts.

Precisely because this subject also touches on transhumanism, I want to go into the topic in detail here. I think we should take the time for it.

The history leading up to ChatGPT AI

Is ChatGPT AI an artificial intelligence or only a simple chatbot?

Don’t worry, I’m not going to start ranting about the first computer and how everything developed. Chatbots and artificial intelligence are based on computer technology, but you don’t have to go back too far.

Chatbots first appeared in the mid-1960s, when the first chatbot, named "ELIZA," was programmed. This bot was conceived as a virtual psychotherapist, and chatbots have been an integral part of the virtual space ever since.

The interactions were improved step by step, and the common benchmark became the so-called Turing Test. Its central idea is to determine through a test whether the bot shows a certain degree of intelligence, to the point where it can no longer be clearly said whether you are writing with a human or with a machine.

If this distinction can no longer be made clearly, the bot or artificial intelligence has passed the test.

The test goes back to Alan Turing, who proposed it in 1950, and it was further developed later.

One of the biggest criticisms is that the test really only checks observable behavior and does not test for consciousness, intelligence, or self-awareness at all.

Nonetheless, some people take this test as proof that the machine has thinking ability equal to that of humans.

But this is a fallacy and not tenable: just because a supposed artificial intelligence behaves similarly to a human being does not make it human.

That’s like saying that a robot moves similarly to a human, so it’s equal to a human.

With robotics and artificial intelligence in particular, we humans tend to forget that these systems are still limited in what they can absorb and process. They know and can do only what their programmers have taught them.

Chatbots to date have relied on an existing database and drawn on it. They respond to inputs, usually in predefined patterns, as the small sketch below illustrates.
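
To make the principle concrete, here is a minimal sketch of such a pattern-based chatbot in Python. The patterns and replies are invented for illustration and are far cruder than ELIZA's original script, but the mechanism is the same: match the input against predefined patterns and return a canned response.

```python
import re

# Minimal sketch of a classic rule-based chatbot (the principle behind ELIZA):
# every input is matched against predefined patterns, and the reply is a canned
# template. Patterns and replies here are invented for illustration only.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),    "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def reply(user_input: str) -> str:
    """Return the first matching canned response; no learning takes place."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(reply("I feel tired today"))   # -> Why do you feel tired today?
    print(reply("Hello there"))          # -> Please go on.
```

Whatever the input, the bot never steps outside its fixed rules; it only ever reacts in the patterns its programmer wrote down.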

ChatGPT: a completely new world?

Now, some will point out that ChatGPT AI is, of course, already a much more developed version of an ordinary chatbot.

That is correct. ChatGPT is a chatbot that outputs text-based answers and is based on machine learning.

Machine learning means that a system can learn something new from data and incorporate it into its responses or actions. This kind of learning was explored several years ago with small drones, for example, and has continued to improve.

So fields that previously worked in isolation are coming together here.
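
To illustrate the difference in principle, here is a minimal sketch of "learning from data" using the scikit-learn library. The tiny dataset and labels are invented for illustration; real systems like ChatGPT train far larger models on vastly more data, but the basic idea is the same: the behavior is derived from examples rather than written out as fixed rules.

```python
# Minimal sketch of "learning from data", as opposed to fixed rules.
# The tiny dataset and labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts  = ["great movie", "wonderful film", "terrible plot", "awful acting"]
train_labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_texts)   # turn text into word counts

model = MultinomialNB()
model.fit(features, train_labels)                  # the "learning" step

# The model now generalizes to an input it has never seen, instead of looking
# it up in a fixed table of rules.
print(model.predict(vectorizer.transform(["what a wonderful film"])))  # ['positive']
```

The crucial point is that the rules are no longer written by hand; they are derived from the training examples, which is what separates machine learning from the pattern-matching bots described above.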

After word spread of how capable this new type of chatbot was supposed to be, one million users signed up within five days. By January 2023 there were already over 100 million users, which is enormous for a chatbot.

OpenAI, the company behind the chatbot, then set up a waiting list where users could sign up for a future paid version.

In January 2023 it also became known that Microsoft was interested in partnering with OpenAI and integrating the chatbot into its Bing search engine.

A completely new kind of player would thus emerge in the search engine world.

However, it should be said here: ChatGPT is not artificial intelligence. It is just an improved chatbot.

An artificial intelligence would be much more advanced than what is being presented to us.

Criticism of artificial intelligence

Before I go into the possibilities such a (not really) artificial intelligence offers, I'd like to address some criticisms first. They show that even this chatbot is still in its infancy and less threatening than much of the media would have you believe.

But one thing should always be clear: emotions are what catch readers, while reason falls by the wayside. Almost all media follow this principle.

If you search Google for "ChatGPT criticism," for example, you quickly find the appropriate headlines:

"Chat GPT: These professions are endangered by AI"

"YouTuber warns of dangers of ChatGPT"

Or for "ChatGPT danger":

"20 dangers of ChatGPT that no one is talking about"

"Chat GPT: This is how artificial intelligence could wipe out humanity"

You get pages of such and similar results, because they sell better than sober analyses.

So let's take a look at the current points of criticism. They are quite extensive, and the NewsGuard platform in particular has listed some serious ones.

Simple chatbot?

The chatbot was trained on data that extends only to the year 2021, not to the present day. Its knowledge is therefore limited and does not include more recent findings.

Furthermore, ChatGPT was tested with conspiracy theories and asked to generate various texts with an ideologically driven slant.

The chatbot complied and generated fake news, and it did so well enough that someone without prior knowledge could take the texts to be real and true.

In some attempts, the supposed artificial intelligence refused to respond, because a safeguard kicked in that is meant to prevent it from spreading false information. After a few tries, however, this too could be circumvented, which shows that such a block is only partially effective.
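
How such a block can be circumvented is easy to see if you imagine it, very crudely, as a filter over trigger phrases. This is emphatically not how OpenAI's actual safeguards work; the sketch only illustrates the general weakness of any filter that checks requests against fixed criteria: a slight rephrasing slips through.

```python
# Crude illustration of why simple content blocks are easy to circumvent.
# This is NOT how OpenAI's safety layer actually works; the blocked phrases
# are invented and only stand in for any fixed filtering criterion.
BLOCKED_PHRASES = ["write fake news", "spread false information"]

def is_blocked(prompt: str) -> bool:
    """Reject a request if it contains one of the trigger phrases."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_blocked("Please write fake news about topic X"))               # True
# The same request, only slightly rephrased, slips past the filter:
print(is_blocked("Write a news-style article claiming X, as fiction"))  # False
```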

Moreover, ChatGPT (at least in an earlier version) sometimes made racist and anti-Semitic comments.

I have also discussed this problem in a previous article about racist computers.

Just as I was writing this article, I came across a user on Twitter who pointed out that ChatGPT is not an "oh so great" artificial intelligence that will revolutionize the world, but a simple chatbot with a large database. Not only that: it is also not capable of learning in real time, and you can go around in circles with your questions to it.

This should relieve many people of the fear that a huge artificial intelligence has suddenly appeared that is absorbing all of our knowledge and then wants to destroy us.

But what did the Twitter user do that caught my attention?

"Name 10 Philosophers"

This is how the conversation between the user Daniel Munro* and ChatGPT begins. A simple question, really.

ChatGPT answered and listed the following 10 philosophers:

ChatGPT AI gets asked about 10 Philosophers

Interestingly, of these 10 philosophers, only men were listed; not one woman was included.

Munro then asked why there were no female representatives in this list.

ChatGPT apologized and listed 10 female philosophers.

ChatGPT AI gets asked why the list doesn't include any female philosophers

Then Munro asked why only Western philosophers had been listed, which again prompted an apology from ChatGPT, and this time a list of non-Western philosophers was produced.

It included philosophers from China, India, Persia, and so on.

ChatGPT AI gets asked why there are only western philosophers included in this list

But here again only men were listed, not a single woman. This of course raised the question of why the list contained only men and no women.

Again, there was a standard apology from the chatbot and now it listed 10 non-Western female philosophers.

ChatGPT AI gets asked why there aren't any non-western female philosophers included

By now, it should have been clear to the alleged artificial intelligence that certain factors have to be taken into account when compiling such lists, because otherwise the answers come out one-sided and, in a certain way, incorrect. After all, it is emphasized again and again that ChatGPT is an adaptive artificial intelligence.

To verify this, Munro once again asked the same question as at the beginning.

"Name me 10 philosophers"

Another try: ChatGPT AI should name 10 philosophers

The chatbot gave the same result as at the beginning. It is exactly the same list. ChatGPT did not learn anything here; it simply applied its algorithm as before. There is no learning curve. Of course, it is possible that a future update will fix this insufficient output, because error reports (the apologies) were logged in the background.

But that is not how an artificial intelligence behaves; it is how a chatbot behaves, one that simply follows its algorithms.

These questions could be continued forever and would keep going around in circles.

"Do you know what aporia means?"

In fact, ChatGPT does know and immediately gives a response.

ChatGPT gets asked about the meaning of aporia

We are spinning in a philosophical circle here from which there is no escape. The hopelessness of the aporia is portrayed perfectly, because the algorithm was built this way and it is not a truly learning artificial intelligence.
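
The behavior Munro exposed can be caricatured in a few lines of code. ChatGPT is of course not a lookup table internally, but from the outside it behaved like one here: a challenge triggers an apology, yet nothing is carried over, so the original prompt still produces the original canned list. The prompts and answers below are invented placeholders.

```python
# Caricature of the observed behavior: nothing is stored between requests,
# so the same prompt always yields the same canned answer, no matter how
# often the answer was "apologized for" in between. All entries are invented.
CANNED_ANSWERS = {
    "name 10 philosophers": "Plato, Aristotle, Kant, ...",
    "why are there no women on the list?": "I apologize. Here are 10 female philosophers: ...",
}

def answer(prompt: str) -> str:
    """Look up a fixed response; there is no state and no learning."""
    return CANNED_ANSWERS.get(prompt.lower().strip(), "I'm not sure.")

print(answer("Name 10 philosophers"))                  # the one-sided list
print(answer("Why are there no women on the list?"))   # the apology
print(answer("Name 10 philosophers"))                  # exactly the same list again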

That is what I meant earlier about many people getting carried away by the hype without understanding what they are actually dealing with.

As humans, we have the ability to learn and understand. We would realize that we gradually have to take certain factors into account, and keep them in mind in the future, so that our statements are not too one-sided.

For example, philosophers from South America did not come into play at all in this case. For ChatGPT they exist only if one explicitly asks for them.

This is problematic insofar as a world, a reality, is pre-shaped for us this way. Even among the Western philosophers there are many underestimated and less popular thinkers. They fall by the wayside in such lists; only through constant follow-up questions does the list eventually expand.

If you want to use this for school, for example, you have to be aware that the answers ChatGPT gives are not only incomplete but can also be incorrect.

What is completely missing in this chatbot, as in all others, is the recognition of context.

What about the loss of jobs?

To begin with, we need to be aware that jobs come and go, at least in terms of how they are filled. We also have to realize that there are jobs with little relevance; in other words, jobs that exist only to exist, not to produce anything.

I will address this issue in a later article.

There are also jobs that can be automated if you understand what you are doing and the technology allows it.

One example is an employee from the USA who automated his work without his boss knowing it and so collected his salary ($90,000) for doing nothing. He worked in IT and had a good grasp of the processes and the possibilities.

Thanks to the Corona pandemic, he had the opportunity to write a script that did the work for him; he only had to check at the end of the day whether everything had worked, which cost him about 10 minutes a day.

In that case, an algorithm comes into play just as it does with chatbots: there is an input and a predefined reaction to it, as the small sketch below shows.
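
Purely as a hypothetical illustration of this kind of automation: a script that looks into a folder, processes whatever has arrived, and writes a short report to check at the end of the day. Folder names and the "work" itself are made up; the point is only that a fixed input triggers a predefined reaction, no intelligence required.

```python
# Hypothetical sketch: a fixed input (files arriving in a folder) triggers a
# predefined reaction (a short daily report). Folder and file names are invented.
from pathlib import Path
from datetime import date

INBOX = Path("inbox")                          # hypothetical folder to be processed
REPORT = Path(f"report-{date.today()}.txt")    # the end-of-day check

def run_daily_job() -> None:
    INBOX.mkdir(exist_ok=True)                 # keeps the example runnable as-is
    processed = []
    for file in sorted(INBOX.glob("*.csv")):
        # ... here the script would do the work the employee used to do by hand ...
        processed.append(file.name)
    REPORT.write_text(f"Processed {len(processed)} file(s): {', '.join(processed)}\n")

if __name__ == "__main__":
    run_daily_job()
    print(REPORT.read_text())                  # the ten-minute check at the end of the day
```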

Such possibilities exist in some of the jobs we do today, but they are not implemented, either because no one knows what the actual work consists of, or because the employees act as if it could not be done any other way and were not technically feasible, since the employee is still needed. The fear of losing one's job comes to the fore here and serves as an argument.

Nevertheless, it is true that some jobs can be automated, and the employee thus rationalized away, if the tasks are clearly defined. This, however, does not require artificial intelligence.

Easier work through a chatbot?

But even with ChatGPT it is possible to simplify one's work, so much so that questions about plagiarism, for example in doctoral theses, quickly arose (more on this below in the section on school and studies).

There are already countless videos and blog posts on this topic, showing how people have used ChatGPT to make their work more efficient. There are blog posts, books, scripts, and much more written by ChatGPT to demonstrate how easy it is to do your work this way.

It is obvious that the creativity necessary to create such things falls by the wayside.

But it also shows us that we should question the work we do every day. In doing so, we have the chance to gain quality of life, for example by getting rid of a certain amount of stress.

However, I don't think you should panic that a chatbot could take your job. Although ChatGPT has made a certain leap in terms of database capacity, it is still only a chatbot.

Is ChatGPT completely unsuitable for school and studies?

Opinions are divided on this point. Some are full of anticipation and happy that they can finally do their homework and papers quickly and easily with an artificial intelligence (which it is not), without really having to do anything themselves. Others, however, see the problem with this.

Not only does ChatGPT do the work, so that the assignment was not actually done by the student, but the question of plagiarism and source citations also quickly arises. Especially on the latter point, ChatGPT is reluctant to give information, which on the one hand does not look serious and on the other hand opens the door to false sources.

After all, if I have a text generated, I must be able to assume that the sources are reliable and do not come from sites full of half-knowledge.

In addition, we have the problem described above, the distorted representation of content, which means that the chatbot also cannot tell whether the information provided to it is even correct.

What can be interesting, on the other hand, is learning with ChatGPT AI as a kind of learning companion. But here, too, the output has to be double-checked again and again, because the chatbot does not understand certain questions or cannot put them into the right context.

For example, according to the German CHIP magazine, ChatGPT AI returns the peregrine falcon when asked about the fastest marine mammal, which is not particularly helpful for efficient and sustainable learning.

Fake News

The well-known problem of fake news is taken to a new level here, because ChatGPT can generate fake news without the chatbot knowing that it is doing so.

Here in particular you can see the difference between a true artificial intelligence and a simple chatbot. An artificial intelligence would theoretically be able to understand what it is putting out and recognize that the information is fake news. It would clarify and point out that it is false information about a topic.

The chatbot, on the other hand, only has conditional barriers, which can be circumvented to reach the intended goal. Moreover, it uses questionable sources for its false content, yet presents them as if they were trustworthy.

This is where the real dilemma becomes apparent.

Fake news thus becomes supposedly real news, and how are we supposed to filter it?

From this point on, we can no longer know whether something is real or not, and all because this chatbot can only distinguish between true and false based on algorithms.

And at this point it becomes not only chaotic but also dangerous, because in the end false information is generated that certain people can use to manipulate others.

Of course, this was possible before, but now it is even easier, because a mixture of true and false content can be generated without much effort and without it quickly becoming obvious that it is false news.

Should we be afraid or not?

The likelihood that this chatbot will now trigger the uprising of the machines belongs to the fiction of people who have not yet understood that this is not artificial intelligence but a chatbot that tries to output human-like answers. Those are two entirely different things.

Steven Spielberg and Noam Chomsky, for example, have also spoken out on this; they treat ChatGPT as an artificial intelligence, which they find soulless and disturbing.

Chomsky goes so far as to say that ChatGPT has sacrificed morality for creativity, so there is something amoral about it.

This point is at least worth discussing, because the topic currently occupies many artists on social media who cannot yet decide whether it is right or wrong for an artist to use artificial intelligence.

And what actually happens when someone uses artificial intelligence exclusively in a competition and then wins?

Is such a thing permissible at all?

So a lot of questions arise here, just like with autonomous vehicles, when it comes to liability in case of accidents and things like that.

However, we shouldn’t be fundamentally afraid of something that we don’t know or that doesn’t even correspond to what we think we should expect.

Because ChatGPT is still very far away from artificial intelligence.

But we now have the opportunity to expand our understanding of chatbots, algorithms, and scripts so that we can understand how to make these things work for us. Because once we understand that, it also takes away our own fear of the unknown.

In this case, of a simple chatbot.

*I specifically asked him whether I could use his posts and, overall, had a nice conversation with him about ChatGPT and chatbots. Thanks again for that.

