
What is AI?


There are many definitions of AI. Generally, AI refers to technology that can complete tasks that we think of as being "human", like recognizing images, making decisions, and producing speech and text.

Most of us use AI every day: when we ask Siri or Alexa a question, follow GPS directions, or receive recommendations for what to watch next on Netflix or what to listen to next on a music app.

What is ChatGPT?


ChatGPT is an AI platform developed by OpenAI. It uses large networks of connected computers, "trained" on extremely large volumes of data, to predict and generate information in natural language. Models of this kind are called Large Language Models (LLMs). Other LLMs include Google Bard, Bing Chat, Anthropic's Claude, BLOOM, and LLaMA.
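To give a rough sense of what "predicting the next word from training data" means, here is a deliberately tiny sketch in Python. It is not how ChatGPT actually works (real LLMs use neural networks with billions of parameters); it only counts which word most often follows another in a toy training text, and the example text and function names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy training text; a real model trains on vast amounts of text.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # prints "cat": it follows "the" most often
print(predict_next("sat"))  # prints "on": "sat" is always followed by "on"
```

Note that the model has no idea what a cat or a mat *is*; it only reproduces statistical patterns from its training text, which is the point the section above makes about LLMs lacking understanding.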

There are other types of AI that produce voice, photos, sound, video, and even 3D printing designs. These, too, are becoming increasingly adept at producing and reproducing media that can be mistaken for human-created, "real" items.


Can I trust the information I get from AI like ChatGPT?


As with any source, you need to critically evaluate any information you get from AI platforms. The output may be delivered confidently; however, there are drawbacks to using LLMs for research.

  • Limited knowledge - These AI applications have been trained on large amounts of data, but not all of that data is accurate, current, or reliable.
  • Lack of "common sense" - LLMs are essentially only capable of predicting words based on their training data. They are NOT capable of understanding or applying common sense to a situation.
  • Lack of context - AI applications provide information based only on the input you provide. They cannot provide contextual understanding.
  • Lack of accountability - AI applications can and do provide incorrect information. Even the technologists who create these systems do not fully know how they produce it.
  • Lack of transparency - The corporations that create AI platforms do not share their training data, nor do they provide reliable sources for the information they generate.
  • "Hallucinations" - When an AI application produces incorrect information, or simply makes things up, it is referred to as a "hallucination". Because the information provided often sounds authoritative, hallucinations can be hard to recognize.

Although many AI applications can seem intelligent, they are really just highly sophisticated algorithms that produce text, photos, video, and sound mimicking what humans make. It is important to remember that these applications are built on algorithms; they do not possess what we think of as human intelligence. However, continued unregulated development of AI platforms has the potential to produce systems with more sophisticated reasoning.


This page is adapted from Atlantic Technological University's Chat GPT & AI LibGuide and Texas A&M University - Commerce's AI in Higher Education LibGuide.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.