
Stumping ChatGPT with Fertility Knowledge


Want to skip straight to Stumping ChatGPT? Scroll down!

Introduction to Chatbots

In the world of artificial intelligence (AI), the word "overnight" is not unwarranted. ChatGPT is an artificial-intelligence-powered chat robot (chatbot) with 175 billion parameters in its natural language processing model, and it has become wildly popular, seemingly overnight.

You have probably heard that ChatGPT is capable of passing business, law, and medical board examinations, has qualified as a level-3 coding engineer at Google (with a $180K starting salary!), has diagnosed a rare (1 in 100,000) condition in seconds, can outperform most students in microbiology, and can even earn a passable grade on a 12th-grade AP literature test.

I use ChatGPT every day, and it is quickly surpassing my use of other search engines. It responds to my queries with concise, human-like, detailed, and referenced answers, solves elaborate problems, and answers intricate questions. Most importantly, it keeps learning from past inquiries so that its answers align exactly with the information I am looking for.

Women’s healthcare, historically underfunded, stands to benefit enormously from new AI technologies, as long as the current limitations are understood and overcome. ChatGPT is not yet HIPAA compliant, and there are currently many concerns about data privacy and integrity. My working assumption is that it was trained on all the digital data ever produced by humans; however, no one really knows what it was trained on. Lastly, ChatGPT occasionally makes up answers and has been known to completely fabricate references and citations (more on that later, as I describe my experience stumping ChatGPT!).

Thinking for US

Academics and teachers have been wringing their hands and shedding bitter tears as they strive to prevent and detect plagiarism that goes beyond simply copying words to much more complex intellectual phenomena: if you ask the computer to THINK for you, have you plagiarized the essence of humanity?

Can you plagiarize reasoning, planning, problem solving, abstract thinking, or the comprehension of complex ideas? Should you believe what an AI tells you? Should you allow a computer to inspire you? Soothe your soul? Write your annual Christmas card? Even as I write this post, I’ve had to revise my anthropomorphic language: did it “write”? Did it “respond” to me? Did it “tell me” something?

With the advent of the “Information Era,” we were forced to develop new philosophies for teaching and learning. Our kids (and we!) no longer waste time memorizing information (e.g., the periodic table of the elements); instead, we are experts at finding information and knowing how to use it. The advent of the AI Era (Augmented Intelligence Era?) will likely call for a similar restructuring of the way we think about education and learning.

The large publishing houses have all adopted a policy that ChatGPT cannot be an author, because it cannot take responsibility for the words it “wrote,” i.e., returned. Current best practice dictates that the author should disclose that ChatGPT was used as a tool in writing. However, that seems odd to me. Why should ChatGPT use be disclosed any more than using the Dewey Decimal System at the public library, or using Google to search for information? I predict that as chatbot AI becomes normalized, that requirement will fall by the wayside as a silly artifact of the early days of AI. (Natch, ChatGPT was used as a tool for writing this post!)

The Information

Humans are driven to build great libraries. Often, we name them after politicians, because they are as much political symbols of progress, enlightenment, and cultural heritage as they are bastions of critical thinking. As a democratizing force driving innovation, the betterment of humanity as a whole, and the empowerment of individuals and populations, they are often the target of political battles. I think of ChatGPT (and now, competing large language models, like Bard) in much the same way. But then I immediately think of the state-sponsored censorship of books and libraries under dictatorships, and draw mental parallels to the book bans currently underway in the USA. I think of the destruction of libraries and thousands of historical and rare archives, such as during the modern Iraq War and Syrian Civil War, and, in ancient times, the multiple cycles of destruction of the Library of Alexandria (and of Nalanda, Persepolis, Ebla, and Pergamum).

Large language models differ from libraries and search engines: they don’t simply present the information to you for your own synthesis and analysis, but digest it, think it over, and spit it out in an interpreted fashion. Therefore, not only can they be manipulated, but they can also be stumped, as we have tested and proven in this experiment to stump ChatGPT.

The internet was initially conceived as a decentralized and open platform that would be resistant to censorship and tampering, but the reality is that it is not immune to such actions. I think of how political rivals use “fake news” to manipulate public opinion, and how states censor web access (or use it for surveillance). I think of how the proliferation of misinformation about vaccines, GMOs, Lyme disease, and so many other health-related topics has harmed and killed so many. I think of the way political rivals can instantly inflame a conspiracy theory. And I wonder: what will happen when rival businesses or states start targeting AI tools like ChatGPT or other language models? How will these powerful tools be used to manipulate public discourse, spread misinformation, or exploit biases and weaknesses?

Stumping ChatGPT 

I have pontificated enough on the abstract. Now back to the immediate! The fertility field is laden with obscure and highly specific information, so I decided to see if I could stump ChatGPT. Most of my questions were answered correctly, instantly, and accurately; however, I was quickly able to reach the limits of its accuracy and cause it to generate incorrect answers.

I asked ChatGPT to describe different cryodevices (bold is my emphasis):

“Cryotop: Cryotop is a device developed by the Japanese scientist Masashige Kuwayama in 1999. It consists of a thin, rectangular strip of polypropylene film with 12 or 16 small wells for holding embryos or oocytes. Cryotop is considered a closed system because the wells are sealed with a thin film of oil or mineral oil, which prevents contamination during vitrification. Cryotop is known for its high efficiency and low volume of vitrification solution.”

I asked ChatGPT what the earliest reported sperm donor pregnancy was, and it gave me the following information, which I was not able to verify as being correct. On PubMed, the publications of Sherman J.K. date back to 1953, but I could not find this specific reference:

“Sherman J.K. (1954) Fertility following insemination with frozen human spermatozoa. American Journal of Obstetrics and Gynecology, 68(5), 1295-1297.”

It claimed: “This paper reports the successful birth of John Randolph, the first baby born from frozen sperm, and provides details about the methods used to freeze and thaw the sperm.”

I asked if that was correct, and it returned: “As I mentioned in my previous response, the first baby born from frozen sperm was John Randolph, born in January 1954 to a couple in Texas who had been unable to conceive due to the husband’s low sperm count. The husband’s sperm was frozen in liquid nitrogen for a year before being thawed and used for artificial insemination, resulting in the successful birth of their son.”

Lastly, I asked ChatGPT for information about the regulation of anonymous tissue donors, and it returned the following mix of accurate and inaccurate information (bold is the false):

“Under FDA regulations, anonymous egg donors must be tested for communicable diseases, including HIV, hepatitis B and C, syphilis, and gonorrhea. Donors must also be screened for genetic disorders that could be passed on to offspring, such as cystic fibrosis and sickle cell anemia. In addition, donors must undergo a medical history review and provide information about their lifestyle habits, such as smoking and drug use.”

ChatGPT Fertility Inspiration

If you are looking for some fertility inspiration, can you turn to ChatGPT for emotional support? We asked ChatGPT to act as a hype man for those suffering from infertility; let us know if this final message of support and hope leaves you feeling empowered or inspired.

“If you’re waiting to get pregnant, the journey can be long and difficult. It’s easy to become overwhelmed with the daily routine of charting cycles, taking medications, and undergoing fertility treatments. But as the days get longer and warmer, it’s important to take a step back and appreciate the beauty of the season and the hope it brings.

Just as the earth goes through cycles of growth and renewal, so do our bodies. With each new cycle, there’s the possibility of new life.

Oocytes, or eggs, undergo a complex process of maturation before they can be fertilized. This process takes several months and is influenced by a variety of factors, including age, genetics, and lifestyle choices.

Spring is also a time for renewal. Take some time for self-care – pamper yourself, whether it’s with a relaxing bubble bath, a massage, or a day out in nature. Focus on activities that bring you joy and relaxation, and don’t forget to connect with loved ones. Our relationships are important for us to maintain a healthy mind and body.

You’re not alone on this journey. Reach out to friends, family, or a professional counselor if you’re feeling overwhelmed or just need a friendly face and listening ear.  

As you embrace the season of spring, remember that your journey towards parenthood is also your journey of growth and renewal. Take a deep breath, appreciate the beauty in and around you, and hold on to the hope that your new beginning is just around the corner.”

Were you inspired, terrified, or convinced? Let us know in the comments!
