Chatting about ChatGPT: How may AI and GPT impact academia and libraries?

Authors

  • Marchel, Universitas Nahdlatul Ulama Surabaya (Author)
  • Septi, Universitas Narotama (Author)
  • Pratama, Universitas Hang Tuah (Translator)

Keywords

ChatGPT, GPT-3, Generative Pre-Trained Transformer, AI, Academia, Libraries

Abstract

This paper provides an overview of key definitions related to ChatGPT, a public tool developed by OpenAI, and its underlying technology, GPT. It discusses the history and technology of GPT, including the generative pre-trained transformer model, its ability to perform a wide range of language-based tasks, and how ChatGPT uses this technology to function as a sophisticated chatbot. Additionally, the paper includes an interview with ChatGPT on its potential impact on academia and libraries. The interview covers the benefits of ChatGPT, such as improving search and discovery, reference and information services, cataloging and metadata generation, and content creation, as well as the ethical considerations that must be taken into account, such as privacy and bias. The paper also explores the possibility of using ChatGPT to write scholarly papers.
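
The cataloging and metadata-generation use case mentioned in the abstract lends itself to a concrete illustration. The sketch below is a minimal example, not code from the paper: it assumes the official openai Python client and an API key in the environment, and the model name, prompt wording, and Dublin Core field list are illustrative choices.

    # Illustrative sketch: asking a GPT chat model to draft catalog metadata.
    # Assumes the openai Python package (>= 1.0) and OPENAI_API_KEY are available.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model would work here
        messages=[
            {"role": "system",
             "content": "You are a library cataloging assistant. "
                        "Reply with Dublin Core fields only."},
            {"role": "user",
             "content": "Draft dc:title, dc:creator, dc:subject, and "
                        "dc:description for the book 'Foundations of "
                        "Statistical Natural Language Processing' by "
                        "Christopher Manning and Hinrich Schutze "
                        "(MIT Press, 1999)."},
        ],
    )

    # Print the model's draft record for a cataloger to review.
    print(response.choices[0].message.content)

In practice, a cataloger would still verify the generated fields against authority files before accepting them, consistent with the accuracy and bias concerns the interview raises.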

Published

2024-08-09

Section

Articles