At the House of Ethics we decided, as of now, to add our in-house “100% GPT-free” Label to each piece of content we create.
For us, it is important to take the necessary time to research, craft, create and reflect using our own brains, and to respect the variety of styles and thoughts of our contributors.
This article is split into two parts: PART 1 presents three different texts, styles and perspectives by three different authors; in PART 2, we played “generative AI”.
We mixed and mashed our three texts together, demonstrating what ChatGPT, Bing, LaMDA, LLaMA… do with scraped data (text, images, video) at a planetary scale.
PART 1 is “3-in-1” point-of-views.
PART 2 is an “ALL-in-ONE” artificially constructed ANSWER!
Large language models mash & mix, slice & dice data from different sources and “glue” snippets of text, video and audio back together according to probability and likelihood (aesthetics), not meaning!
They produce, generate (not create), NEW content: an uncertain collage of data and words that, if you are lucky, means something, at best is plausible, but is mostly completely false. Like our mixed version in PART 2.
Consequently, such models are not reliable for impactful decision-making purposes, or for ethical activities. The veracity and validity of outputs need to be checked at all times, especially if you have no knowledge of the subject.
Thus they become potential superspreaders of misinformation.
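The probability-driven “gluing” described above can be illustrated with a deliberately tiny sketch. The word table and probabilities below are invented for illustration; real models learn billions of such statistics over tokens, but the principle is the same: the next word is sampled by likelihood, with no regard for truth or meaning.

```python
import random

# Toy bigram "language model": each word maps to candidate next words
# with made-up probabilities. This is a hypothetical illustration,
# not how any specific product is implemented.
bigram_probs = {
    "the":    [("cat", 0.5), ("data", 0.5)],
    "cat":    [("sat", 0.7), ("meowed", 0.3)],
    "data":   [("sat", 0.2), ("flows", 0.8)],
    "sat":    [("quietly", 1.0)],
    "meowed": [("quietly", 1.0)],
    "flows":  [("quietly", 1.0)],
}

def generate(start, n_tokens, seed=None):
    """Glue tokens together purely by sampled probability, not meaning."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        options = bigram_probs.get(out[-1])
        if not options:  # no learned continuation: stop
            break
        words, weights = zip(*options)
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 3, seed=42))
```

Every output is grammatically plausible by construction ("the data sat quietly" is a valid chain of likely pairs), yet nothing in the sampler checks whether the sentence is true: plausibility without meaning, in miniature.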
Balancing Promise and Caution: Navigating the Challenges of generative AI in Medicine
by Ahmed HEMEDAN - Doctoral Researcher - Bioinformatics
Artificial intelligence is revolutionising the world, and its latest creation, generative AI, is no exception. With the ability to produce human-like text and responses, the possibilities are endless1. However, this new frontier also brings with it a host of concerns and challenges, particularly in the field of medicine2. In this article, we will explore the impact of generative AI on medicine, and the temptations that come with its deployment.
The promise of generative AI in medicine
Generative AI has the potential to transform the field of biomedicine in countless ways. From assisting physicians in forming differential diagnoses to answering patient questions, this technology has the power to revolutionise the healthcare industry3. It can analyse vast amounts of big data, automate repetitive tasks, and improve accuracy, ultimately democratising research and bringing about faster clinical implementation of basic science4–6.
However, there are also significant challenges that come with the deployment of generative AI in medicine7. One of the biggest concerns is the potential for misinformation. Generative AI can have blind spots, be overly confident, and contain biases and prejudices. This is why it is essential to approach this technology with caution, reflection, and responsibility8,9. For instance, ChatGPT generated a misleading explanation about “crushed porcelain added to breast milk supporting the infant digestive system,” which could pose a health risk if used as patient education10.
The temptations of technocracy in the deployment of generative AI in medicine
The deployment of generative AI in medicine holds great promise, but it also requires a measured and responsible approach. The temptation of technocracy refers to the danger of relying solely on technology and science to solve societal problems, without fully considering the potential consequences. This temptation comprises three key elements:
The attitude that technology drives society while law and ethics hinder progress. This perspective often sees innovation as inherently good and virtuous, disregarding potential adverse consequences.
The idea that something should be done just because it can. This focus on creating the next paradigm-shifting technology, rather than rooting out bias or ensuring that innovation meets the needs of broader communities, can lead to unintended consequences and harm.
The portrayal of technological failures and societal harm as unintended consequences, absolving designers of their products’ harms.
The new Dr. Google
The new Dr. Google has raised concerns that AI technologies like ChatGPT may lead to an “AI’s Jurassic Park moment,” where the indiscriminate confidence of their responses can pose real danger to patients.
As ChatGPT becomes more widespread in providing medical information, it is crucial to educate patients on how to distinguish between accurate and potentially dangerous inaccuracies.
Scientists have even expressed worries about ChatGPT’s ability to generate convincing fake research-paper abstracts, emphasising the need for caution when evaluating scientific communications. Policies should be put in place to prevent the use of AI-generated texts, which can pose serious risks to patient health8,9,11.
Embracing the promise, mitigating the harms
Generative AI holds immense promise for medicine, but it is essential to confront the challenges and potential harms that come with its deployment. It is crucial to move forward, but a measured approach is needed to ensure things are done correctly.
The deployment of generative AI in medicine must be approached with caution, reflection, and responsibility, to ensure its immense promise is realised while mitigating its potential harms.
The technology remains in its early stages of development, and its output should not be considered the same as advice from clinical experts. Further research is needed to understand its output when used in response to medical questions.
10. Dileep George [@dileeplearning]. Baby got a tummy ache Tryin this is no mistake rhyme is from me, advice is from chatGPT. 😇 https://t.co/ZpzRpctcb0. Twitter https://twitter.com/dileeplearning/status/1598959545229115392 (2022).
"Nothing is lost, nothing is created, everything is transformed" (A. Lavoisier)
by Katja RAUSCH - Founder @House of Ethics
Undoubtedly, the impact of generative AI is planetary, in both scale and speed.
Its ultra-fast adoption is more than a classic impact. It acts like a phenomenon.
A systemic phenomenon.
One could even argue that a planetary techno-putsch has just happened. Everybody seems to have been taken by surprise. A real coup, with the world under siege by GenAI.
Even Mira Murati, CTO of OpenAI, claims to have been surprised by the outcome in a recent interview.
Competitors such as Google and Meta reacted dazzled: “We did not launch Bard for fear of reputational risk.”
For ethics too it is a première: having to deal with an emerging technology that became operative so quickly, at this scale and speed. Clearly, GenAI outpaced everything, especially regulations and ethics.
Right now, GenAI sits on ETHICAL PERMAFROST
There are three major reasons why generative AI sits on ethical PERMAFROST:
– because of data ethics: GPT models are based on industrial-scale intellectual property theft and deep privacy breaches;
– because of the inherent features of transformer models and the uncertainty factor: users report how systems like ChatGPT or Bing continuously go rogue, turning into superspreaders of misinformation and fountains of aggressivity;
– because a simple ratio between the level and the sense of responsibility is not respected.
At such a scale and speed, for a technology that is neither robust nor transparent, as declared by CEO Sam Altman, the LEVEL of RESPONSIBILITY MUST be at its HIGHEST. However, we observe that the SENSE of RESPONSIBILITY is at its LOWEST.
There is a complete lack of duty of care. In 2016, Microsoft had to take the Twitter bot Tay offline within less than 24 hours because of racist, homophobic, misogynistic and hate speech. Back then, Microsoft had a SENSE of responsibility. Now, with the new “answer machine” version of BING, Microsoft’s level of responsibility should be considerably higher, at a planetary scale. However, the SENSE of responsibility demonstrated by Microsoft (and OpenAI) is at its lowest.
GenAI’s challenges are proportionate to the phenomenon. They move beyond the product, the technology and the industry (the ad industry). They are systemic.
From an ethical perspective, it is paramount to now address GenAI in its entirety: to go beyond the product-centric approach and view it as a process. We need a holistic, systemic approach to best deal with its far-flung challenges.
How to address the GenAI phenomenon?
We need to widen the scope and look at the entire “supply chain” of generative AI.
- We need to consider the upstream actors, data brokers, product engineers, businesses, and downstream engineers and end users.
- We need to map a “Chain of accountability and transparency” all along the process.
- We need to develop ethical frameworks in engineering, consider data governance, risk management with business practices and effective user protection.
In this regard, ethics is not enough. We need regulations. Even Mira Murati, CTO of OpenAI, insisted on that aspect in her Time interview.
Upcoming regulations such as the EU AI Act and the US AI Bill of Rights are needed to frame this emerging and integrative technology. The business model needs to be tamed on ethical and regulatory levels.
Ethical challenges for humans
It all goes back to the creator and her/his creation, and to human-machine interaction.
With generative AI we observe a reversed Pygmalion effect, which is all the more dangerous: the creation also creates the creator. The creating creator turns into the created creator.
Undeniably, with ethics, the question of the ethos is endemic. How does GenAI as an emerging technology impact, influence, shape us humans? What are the limits? How much is too much? Where are the opportunities?
But mainly, how can we protect people from chronically overestimating technology and underestimating themselves?
Instead of building trust in machines, shouldn’t we start building trust in humans to be able to view technology at its given value?
We need ethics! We need regulation! Ethics for people and regulation for generative AI systems. To live responsibly and respectfully, and thrive on responsible innovation.
Will GenAI take BigTech down?
Will GenAI become the norm? Probably. But in a transformed, updated and augmented version.
1) Generative AI will move from General Purpose AI to fine-tuned industry-specific purpose AI.
Not only will it be much easier to regulate, but it will also become much more operational, taking on tasks in which it excels.
Education, health, law and communication will create industry-specific applications fed on industry-specific data.
The real disruption will start once specifically purposed generative modules will be integrated into organizational processes.
2) Will GenAI take BigTech down?
Will a new cohort of leaders emerge? Will GenAI leave the ad industry and integrate business at an organizational level (beyond general-purpose use)? This might be the return of ERP pioneers like SAP, ORACLE and… MICROSOFT, specialized in business processes, firm software and workflow management (the “search engine” business accounts for only some 10% for Microsoft; cloud computing, as organizational firmware, is their real business). Once coupled with business backbones and proprietary data warehouses, GenAI modules could turn into trusted and powerful copilots running on proprietary pooled data.
3) New workflows, new skills and a NEW ETHICS will emerge.
New skills like “prompt engineering” are already being taught. But ethics, too, needs to be fit for emerging technologies.
That is why we at the House of Ethics, with our contributor Daniele PROVERBIO, have developed a novel concept: SWARM ETHICS.
Swarm ethics is about emerging ethics (versus hard-coded ethics) for people (not systems) in changing, interactive environments in our digital age.
In May 2023, we will present this novel concept for the first time at the Illinois Institute of Technology in Chicago. Currently, we are working with an international specialist on designing a practical toolbox for companies to deploy Swarm Ethics in corporations, at each level of responsibility.
As the 18th-century French chemist Antoine Lavoisier said:
“Rien ne se perd, rien ne se crée, tout se transforme.”
Generative AI: the dusk of the “original internet” or its comeback?
by Daniele PROVERBIO - Doctoral Researcher - System Biomedicine
The current hype on generative AI is quite well deserved. The tech leap was so great that things will never be the same again. As usual, little will evolve as we now predict, and there are still many limitations that slow down the adoption process. But a significant step was taken nonetheless.
A great deal is associated with the algorithm: “transformers” – a type of deep learning architecture – are without question the current state of the art.
They had already made marvels in diverse applications, where they drastically outperformed previous deep learning methods. A notable example was AlphaFold, which improved the prediction of protein structures almost fourfold with respect to previous solutions. Transformers are here – and here to stay, until the next big thing.
Unfortunately, transformers are expensive: to train, deploy and run them, you need a staggering amount of data, computing power, energy and brilliant minds. You need great resources and competences.
The question, then, is: can open-source initiatives, foundations, and single nerds willing to make the internet the “agora” of humankind afford those costs?
Or will generative AI be a unique asset of Big Techs, because the scale gap is too enormous for competitors? Without disruptive initiatives like “social” distributed computing and training, the answer may be on the Big Tech side.
If this is the case, the Internet may slide towards a new model, dominated by a few giant corporates (rings a bell?) with the resources to maintain expensive genAI features, and with entry barriers big enough to avert competition – until a new disruption.
In this case, the ethos of the “original internet” (internet as a place for democracy and equality to thrive, sharing software and resources) is likely doomed.
The very few remaining foundations will have the hardest time competing on genAI. Think of LibreOffice: the moment Microsoft implements a genAI on Excel, wouldn’t its free counterpart suffer a tremendous blow? In this case, genAI would cut even the potential of competing.
However, an alternative could occur: a knockback by users. A search for human-made content, curated, of high-quality and perhaps following different business models. Something like current search engines that are alternative to Google – but perhaps on a larger scale, if genAI is perceived to run too fast and too far.
In this case, would ethical choices be key drivers to shape an alternative Internet – like what many tried to pursue during the ‘90s?
Or, as a third possibility, could ethical perceptions themselves be modified by applications of genAI and the profit behind, until something we do not believe “ethical” now becomes the standard?
And thus, could a genAI oligopoly come to be regarded as the herald of the old ethos, for the good of people and populations?
This case is probably more far-fetched, as technological changes are much faster than changes in human mindsets, but it remains intriguing, in case new stakeholders manage to enter the debate.
After so many questions, some noses may turn up. Too many “may”s, too much blurring, too few clear-cut statements. But that is unavoidable when talking about scenarios and possibilities. And anyway, isn’t this freedom of uncertainty humans’ unique value when facing modern algorithms?
Jumper, J., Evans, R., Pritzel, A. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). https://doi.org/10.1038/s41586-021-03819-2
Lin, Tianyang, et al. “A survey of transformers.” AI Open (2022).
Dehouche, Nassim. “Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3).” Ethics in Science and Environmental Politics 21 (2021): 17–23.
PART 2 : Mashed-up Text
Balancing a lost promise, all is transformed! In the dusk.
Artificial intelligence is revolutionising the world, and a planetary techno-putsch just happened, according to Mira Murati, CTO, OpenAI. Little will evolve as we now predict.
However, this new frontier also brings in the field of medicine industrial-scale intellectual property theft and deep privacy breaches. A great deal is associated to the algorithm: “transformers”.
Generative AI has the potential to transform a field with a technology that is not robust nor transparent as declared by CEO Sam Altman. The LEVEL of RESPONSIBILITY MUST be at the HIGHEST but there is the temptation of technocracy.
The danger of relying solely on technology and science to solve societal problems is like a Reversed Pygmalion effect. In this case, the ethos of the “original internet”.
Could ethical perceptions themselves be modified by applications of genAI? It is crucial to move forward, but a measured approach is needed to ensure things are done It is crucial to move forward, but a measured approach is needed to ensure things are done It is crucial to move forward, but a measured approach is needed to ensure things are done It is crucial to move forward, but a measured approach is needed to ensure things are done…
Swarm Ethics, a novel concept for emerging technologies (the only non-transformed and original content!).