Emerging from Despair

By Edvard Munch – The Athenaeum: pic, Public Domain, https://commons.wikimedia.org/w/index.php?curid=38018045

I know I have been radio silent online for much of the past few months. I have to be honest: I have been struggling with profound feelings of despair while watching the large-scale attacks on civil society, with the knowledge that we have three-and-a-half years left of this. At least.

As a Librarian, watching the civic, institutional, and diplomatic damage of the second Trump administration has been personally gutting. The breadth and depth of the attacks on government institutions, knowledge, research, and public service programs has been staggering. It has been a non-stop shock-and-awe campaign against anyone and anything that benefits the public good.

The cascading chaos of DOGE, the Big Ugly Bill, and the United States reneging on its domestic and international commitments are the most visible actions to the public. All of that is outrageous enough, but it is just the tip of the iceberg. What people are not seeing are the countless research projects being silently cut off. Some of these projects may find funding from other institutions or other countries. Some, if not most, will probably just fade into obscurity. We are entering a research dark age.

I’m not speaking purely in hypotheticals, either. One of the most alarming examples is the defunding of incredibly valuable mRNA vaccine research grants. I could list countless others across a myriad of scientific domains – all of them deserve more visibility and advocacy on their own. However, as a Librarian with an eye on Trustworthy AI policy, I have to limit my scope.

In particular, I want to highlight research projects that demonstrate tangible promise for implementing Trustworthy AI policies in the future. The bulk of these research projects were supported by the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST).

In 2023, the National Science Foundation was funding the National Artificial Intelligence Research Institutes. There were (are) seven research institutes dedicated to various themes:

  • Trustworthy AI (TRAILS)
  • Intelligent Agents for Next-Generation Cybersecurity (ACTION)
  • Climate Smart Agriculture and Forestry (AI-CLIMATE)
  • Neural and Cognitive Foundations of Artificial Intelligence (ARNI)
  • AI for Societal Decision Making (AI-SDM)
  • AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes (INVITE, AI4ExceptionalEd)

In 2023, NIST launched the Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC) alongside its NIST AI Risk Management Framework (AI RMF 1.0). As I learned about these technology and policy frameworks, I got really excited. I even had the pleasure of speaking about them at IAC24 and DGIQ West 2025.

But already, you can see the erasure of some of these research themes by reviewing changes to the NSF’s AI Research Institutes website. In its list of research themes, Trustworthy AI and Climate Science have been de-emphasized. Most grants have been archived indefinitely.

Since then, the Trump Administration has released its AI Action Plan, which I maintain is intentionally vague. It also strategically de-emphasizes and omits the Trustworthy and Sustainability themes of AI research and policy.

From my perspective, this all looks bleak. After some grief, I have to remind myself that we’re not starting from zero:

  • Most of the research and publications have not been taken down (yet), and much of it is being mirrored elsewhere in case that changes
  • Some of the research grant funding is still ongoing; other projects are finding new sources of funding
  • All of the people behind these projects and papers are still around
  • While the US regresses on Trustworthy AI policy, the work of the European Commission, individual states, and other regions moves forward
  • This administration is not forever

The feelings of despair are real. They are the product of an intentional campaign. I said shock-and-awe and I meant it. DOGE, and everything else along with it, is meant to dispirit those who believe in the public good in any way whatsoever. If you feel despair, you are not alone and you are not overreacting. This feeling of isolation and helplessness is the point.

As time goes on, I find my despair shifting to anger. I return to the Internet after a summer hiatus with a sense of intention. I want to find the people, organizations, and papers that are continuing to do good work out there. And by good work, I mean work that is making search, AI, or any information experience on the Internet more Trustworthy.

Voices like Gary Marcus and Ed Zitron have done great work drawing visibility to the larger problems with the current AI hype bubble. I would like to supplement their work by drawing more attention to the specific technical mechanisms and legal definitions proposed for making generative AI more trustworthy, secure, and reliable than it currently is.

Another person whose work I appreciate is Helen Toner, who brings a cybersecurity perspective to analyzing AI technologies. Work from Simon Willison, Kurt Cagle, and Jorge Arango has given me hands-on experience with large language models, providing understanding I would not have gathered otherwise. A lot of amazing people are still doing great work out there. I want to stand on the shoulders of giants to inspire mental models for Trustworthy AI as infrastructure, as policy goals, and as work intended for the public good.

By Badseed – Self-photographed, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=5997590

Don’t listen to the AI intimidation-mongers. You can understand how a Generative AI model works just as you can understand a crystal radio set. This technology is not “above our heads.” It’s weird, but it’s not magic. I’ve mentioned how I feel that a lot of the public messaging about generative AI intentionally obfuscates how the technology works. My frustration (and anger) with this intentional obfuscation sparked a desire to write articles and give presentations that are as understandable as possible.

In the age of generative AI, I am of the mind that previously arcane issues of Information Science, Information Behavior Theory, and Information Literacy have never been more culturally important or directly relatable. I aim to tie Trustworthy AI research and policy discussions to real-life, human examples with tangible benefits to the public and our daily lives.

Some topics I want to explore:

  • Knowledge graphs: why they’re incredibly important (i.e., keeping generative AI models honest and up-to-date), and who is building knowledge graphs as infrastructure for the information retrieval of the future (see the sketch after this list)
  • The tactics and motivations behind the intentional obfuscation of generative AI technologies in public discourse
  • Exploring new technology trees outside of traditional generative AI stacks: static and dynamic prompting, RAG, and model routing (this is currently a gap in my understanding)
  • What a post-platform Internet looks like (Protocols Not Platforms!)
  • Those documenting the damage to institutions and preparing for a post-MAGA rebuilding
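
To make the knowledge graph item above concrete, here is a minimal sketch of the kind of grounding step I have in mind: querying a curated knowledge graph for a fact that a generative model’s claim can be checked against. It uses Wikidata’s public SPARQL endpoint via Python’s SPARQLWrapper library; the claim-checking framing is my own illustration, not a production fact-checker.

```python
# Minimal sketch: check a generative model's claim against Wikidata.
# Assumes `pip install sparqlwrapper`; the framing is illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON

# Wikidata asks clients to identify themselves via a User-Agent string.
sparql = SPARQLWrapper(
    "https://query.wikidata.org/sparql",
    agent="trustworthy-ai-sketch/0.1 (example; not a real project)",
)

# Ask the graph: what is the capital (P36) of the United States (Q30)?
sparql.setQuery("""
    SELECT ?capitalLabel WHERE {
      wd:Q30 wdt:P36 ?capital .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

graph_answers = {
    row["capitalLabel"]["value"]
    for row in results["results"]["bindings"]
}

model_claim = "Washington, D.C."  # pretend this came from a language model
print("Supported by the graph:", model_claim in graph_answers)
```

The point is not the five-line query; it’s the architecture. A model’s fluent output gets checked against a maintained, citable store of structured facts. That is the infrastructure role I want to explore.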

As I return from time offline, I look forward to connecting with others and taking part in the conversations to come.

As we fight these attacks on information access and the public good, I will leave off on two music-related notes. I will paraphrase Gord Downie, a personal hero of mine, when I say we should not meet these attacks on our institutions, collective knowledge, and understanding with any patience, tolerance, or restraint.

The destruction is overwhelming, but it is not forever. This song by Descartes a Kant has been keeping me going. There will be creation after destruction. You’ll see.

Let’s find each other. Share solidarity. Let’s organize and pick up the pieces from these fucking vandals.

The State of Trustworthy AI Policy – Part 1 of 2

A photograph of the Seattle Central Library. The photo is distributed via Creative Commons License. More info: https://commons.wikimedia.org/wiki/File:Seattle_Library_01.jpg

With my colleague Erik Lee, I had the great privilege of speaking at the Information Architecture Conference in Seattle (at the beautiful Seattle Central Library) in April of last year. The presentation, titled “Beware of Glorbo: A Case Study and Survey of the Fight Against Misinformation,” was about AI Data Poisoning (now also known as Prompt or Context Injection), but it included a section where I summarized the state of AI Data Policy as I understood it then. People told me that the mental models I provided were helpful for getting their bearings on the specific terms surrounding AI policy.

In light of this feedback, I thought it would be good to revisit that talk ahead of an update I’m giving later this year. But first, let’s look at the state of AI policy terms as of April 2024:

A diagram showing nebulous shapes haphazardly placed. Each of the shapes bears a term such as "Robust AI," "Strong AI," or "Trustworthy AI," accompanied by question marks. The image conveys the nebulous understanding of these terms in the spring of 2024.

My deck showed the nebulous state of the popular AI policy terms being thrown around. The term names are not intuitively descriptive, and the relationships between them are unclear, especially when sloppy marketing jargon obscures their meanings as technical terms of art.

We start by setting definitions. Terms that were conceptually identical have been grouped.

  • Explainable/Transparent AI – AI that can explain the reasoning behind its output
  • Robust AI – AI that is technically robust (consistent, accurate, and secure)
  • Ethical/Responsible AI – AI that is inclusive, non-discriminatory, and fair – it may even have environmental considerations
  • Trustworthy AI – AI that encompasses the above principles (safe, secure, consistent, and accountable) to enable trust in the AI’s output

  • Strong AI – AI that is aware of concepts, its own reasoning, and itself as an independent agent

Using these definitions, I drew a diagram to help people visualize the state of these terms.

A structured diagram showing the relationship between terms. Trustworthy AI is at the top of the hierarchy. Three sub-groups are below it: Explainable/Transparent AI, Robust AI, and Ethical/Responsible AI. The term Strong AI is nebulous and disconnected.


In the diagram, I placed Trustworthy AI as a superset concept that includes each of the other AI policy concepts (Explainable/Transparent AI, Robust AI, and Ethical/Responsible AI) within it. Strong AI (now more commonly referred to as Artificial General Intelligence, or AGI) is disconnected, since it is only theoretical.

This model is imperfect, as these policies often overlap and share goals, definitions, and desired outcomes. I found, however, that thinking of each of these policies as contributing to the larger goal of Trustworthy AI is a helpful way of understanding them and how they relate to each other.
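
Since I’m a taxonomist at heart, the same mental model can be written down as a tiny SKOS vocabulary. This is a sketch, not anything official: the URIs are placeholders I made up, built with Python’s rdflib library.

```python
# Sketch: the Trustworthy AI hierarchy as SKOS triples (pip install rdflib).
# The example.org URIs are hypothetical placeholders, not a real vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("https://example.org/ai-policy/")
g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

# Trustworthy AI is the superset; the three policy families sit beneath it.
for name in ("ExplainableTransparentAI", "RobustAI", "EthicalResponsibleAI"):
    g.add((EX.TrustworthyAI, SKOS.narrower, EX[name]))
    g.add((EX[name], SKOS.broader, EX.TrustworthyAI))

# Strong AI (AGI) gets a label but, deliberately, no broader/narrower links:
# it stays disconnected, just like in the diagram.
g.add((EX.StrongAI, SKOS.prefLabel, Literal("Strong AI (AGI)", lang="en")))

print(g.serialize(format="turtle"))
```

Writing it down as data makes the model’s imperfection explicit: the moment two policies overlap, you need something richer than broader/narrower, which is part of why I treat the diagram as a teaching aid rather than an ontology.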

In addition to defining and contextualizing these AI policies to one another, I also profiled the organizations making the most waves in these spaces and what had been published and legislated up to that point.

The heavy hitters that I had found were:

Additionally, I noted some movement in the Executive and Legislative branches of the United States government at that time.

Now, nearly a year later, what has changed? A lot, as you can imagine.

I will speak about this at DGIQ West 2025 in a talk titled “Catching Up with Glorbo: Combatting AI Data Poisoning with RAG Frameworks”. You won’t have to wait until May, as I plan to write about this in Part 2 ahead of the conference. In the meantime, here are some highlights:

Thank you to everyone who has encouraged me to continue writing and speaking about this subject. Please don’t hesitate to reach out to me with helpful feedback (that includes corrections). 🙂 See you soon in Part 2.

On Launching a Blog in 2024

Like a lot of people in tech in the years leading up to, during, and after COVID, my relationship to the Internet and technology changed on a profound level.

Photograph of the author circa 1998. An "awkward" teenager in front of a computer desk stacked with books.
The author at the start of his information science journey, ca. 1998

I spent the greater part of my youth devouring and regurgitating tech and internet hype. I sincerely believed that information technology was the solution to most, if not all, of society’s ills. I was too caught up in the novelty of this new technology to consider the serious downsides. It wasn’t all bad, however. This enthusiasm led me to get my Library and Information Sciences degree.

Ironically, it was the insight gained from my MLIS degree that contributed to my declining enthusiasm for technology. As the implications of disinformation and information illiteracy played out in recent years, I watched my relationship with technology swing from a source of inspiration to a fount of existential dread. In time, outside of what I needed to do for daily work, I withdrew from social media and the Internet almost entirely.

Photograph of the author in his 30's in front of a working Xerox Alto computer from 1973 at the Living Computer Museum in Seattle
The author at the peak of his tech exuberance. Rest in Peace, Living Computer Museum

Others have written about similar experiences. We’ve all heard the reasons: the enshittification of the Web, misinformation, cyberbullying, how generative AI is making the Dead Internet theory more of a reality, Zoom fatigue. At this rate, why bother with the web anymore? Anything you post is going to be used to train generative AI models, further crowding out signal with noise.

A garish image of a computer-generated skeleton holding a machine gun in each hand. In clashing fonts and colors text reads "BRING BACK RSS READER'S [sic] AND BLOG'S"
The amazing work of “Admin” from da share z0ne. Replace your entire wardrobe and buy all of their merch

And yet, I’ve been inspired watching some colleagues in my professional network, such as Tracy Forzaglia and Stuart Maxwell, restart websites and blogs. Additionally, I love the work of Molly White (Web3 is Going Just Great, Follow the Crypto), which reminds me of the value of having a platform that you control. The old Internet is still there, dammit.

I also had a great conversation with Jorge Arango at IAC 2024. This conversation was partially responsible for Jorge writing an article about why the IA field needs to get out of the AI doldrums. The conversation also helped rekindle my curiosity towards these new technologies.

Yes, the harms are real and they will continue to grow, horrifyingly, in scale. As Jorge reminded me, nay, challenged me, that doesn’t preclude us from getting nerdy with the tools to find out what good they can do. Challenge accepted.

In addition to writing about “AI” technology itself, I plan to discuss developments in policy such as:

  • Transparent or Understandable AI
  • Ethical or Responsible AI
  • Trustworthy AI
  • Robust AI
  • Sustainable AI

I also want to write about topics relating to:

  • Linked Open Data “infrastructure”
  • Information Theory in everyday life
  • Humane design and the ethics of information environments
  • Stuff I just think is neat
An image of Marge Simpson holding a potato saying "I just think they're neat."
Marge Simpson holding a potato and saying “I just think they’re neat.” Be like Marge

I am not a Machine Learning or Generative AI expert. I don’t have a software engineering degree. But I hold a Master’s in Library and Information Science from the University of Washington Information School. I am an experienced Taxonomist, Ontologist, and Information Architect who has had the pleasure of working with semantic technologies in enterprise environments. I hope to use this platform to have professional conversations, learn in public, and help others learn in the process.

I’m excited to explore what we can do when we take the means of communication back from centralized platforms. I think the Internet can be “fun” again.

Protocols Not Platforms!
Pods Not Profiles!

Disaffected info nerds of the world, unite! ✊ – Sherrard

A recent photograph of the author, standing in a field with a beard, wearing sunglasses, a cap, large headphones and a shirt by dashare.zone reading "IT IS NO LONGER POSSIBLE TO 'LOG OFF'"
The author further on in years. A little more jaded, but still an information nerd