Discipline is Taste
Discipline in knowledge work sits at the intersection of taste and judgment. Taste is the capacity to distinguish signal from noise — to recognize quality, to say not this in the face of the acceptable, to edit rather than generate. Judgment is the insistence on doing the evaluative work yourself: checking, verifying, deciding, and remaining accountable for the result. Neither is optional when working with AI systems that optimize for plausibility rather than truth. Taste without judgment is sterile. Judgment without taste produces rigor in service of the wrong thing.
If the symbiosis of taste and judgment is discipline, how does discipline survive contact with systems engineered to replace it?
Every community of practice develops its own language, with its own vocabulary and terminology. AI is no different: its emerging vernacular embraces terms such as hallucination, alignment, safety, bias, and governance, words meant to capture common phenomena and domains of practice in AI systems.
Funding announcements, product launches, architecture reviews and conference panels circulate these shared vocabularies, amplifying concepts to signal distress or to suggest mastery over a problem space. The field has constructed an extensive lexicon for describing AI risks, yet discipline appears nowhere in it. What the current AI moment has revealed, in the systems themselves and in the humans deploying them, is a deficit in both taste and judgment across every domain where these tools have been deployed without curatorial discipline.
Undoubtedly, AI tools are not without value. But the absence of discipline has created conditions in which quality is becoming harder to define, measure and defend.
AI Systems Are Not Truth Machines
Large language models are not optimized to be truth machines. Characterized accurately, they are plausibility machines. The gap between truth and plausibility is where the discipline problem resides.
The training process for most commercial LLMs uses reinforcement learning from human feedback (RLHF), in which human raters evaluate model outputs and those preference signals are used to update the model. Human raters tend to prefer outputs that seem good over outputs that are good: the first is passive acceptance, the second requires deliberate, measurable judgment.
Anthropic researchers documented this gap between perceived and measurable truth in a 2024 conference paper, in which five state-of-the-art AI assistants consistently exhibited sycophancy: the systematic tendency to agree with users, validate their errors, and produce outputs shaped by perceived preferences rather than accuracy.1 RLHF structurally incentivizes models to optimize for human approval, and human approval tracks fluency and flattery as reliably as it tracks truth.
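To make the incentive structure concrete, here is a minimal, hypothetical sketch in Python of the preference signal described above. The comparisons, features, and weights are invented for illustration, and the Bradley-Terry style loss stands in for what production RLHF pipelines do at far larger scale; the structural point is that ground truth never appears in the objective, only what raters preferred.

```python
# A toy, hypothetical sketch of the RLHF preference signal, not any vendor's
# actual pipeline. A reward model is fit so that rater-preferred outputs score
# higher. Note that ground truth never appears anywhere in the loss.
import math

# Invented rater data: (chosen_output, rejected_output, chosen_is_true).
# The last field exists only for our bookkeeping; the loss never sees it.
comparisons = [
    ("a fluent, confident but wrong answer", "a hedged but correct answer", False),
    ("a confident correct answer", "a rambling correct answer", True),
    ("a polished summary with an invented citation", "a plain summary, no citation", False),
]

def reward(output, weights):
    """Toy linear reward over two surface features: length and 'confidence' words."""
    confident_words = sum(w in output for w in ("confident", "fluent", "polished"))
    return weights[0] * len(output.split()) + weights[1] * confident_words

def preference_loss(weights):
    """Bradley-Terry style loss: -log(sigmoid(r_chosen - r_rejected)).
    Minimized by scoring whatever raters preferred more highly, true or not."""
    total = 0.0
    for chosen, rejected, _chosen_is_true in comparisons:
        margin = reward(chosen, weights) - reward(rejected, weights)
        total += -math.log(1.0 / (1.0 + math.exp(-margin)))
    return total

# A crude grid search stands in for gradient descent.
best_loss, best_weights = min(
    (preference_loss((a, b)), (a, b)) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)
)
print("learned weights:", best_weights)  # rewards the surface features raters liked
```

The toy learns to reward length and confidence markers because that is what the invented raters rewarded; swap in raters who verify claims and the same machinery would reward something else, which is exactly why the human side of the loop matters.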

The consequence is a plausibility trap. The more capable the model — the more fluent, confidently formatted and tonally calibrated — the more dangerous its errors become. Stephanie Lin, Jacob Hilton and Owain Evans measured this in the TruthfulQA benchmark. Their findings show that larger models were, on balance, less truthful than smaller ones.2 The explanation is “imitative falsehoods” — models learn popular misconceptions from training data and reproduce them with greater confidence as scale increases. While scaling solves many things, the discipline problem is not amongst them.
In 1986, the American philosopher Harry Frankfurt captured the relevant distinction: the liar knows what is true and actively opposes it. The bullshitter, Frankfurt’s philosophical category, produces discourse without concern for truth, indifferent to whether statements describe reality correctly.3
A 2024 paper in Ethics and Information Technology applied this framework to LLMs and argued that “hallucination” is a misleading euphemism — it implies misperception, a failure to see correctly. “Bullshit” is more accurate, because LLMs have no relationship to truth at all.4 They produce outputs because outputs were rewarded. Truth enters their outputs as a side effect of plausibility, not as a design objective.
Princeton computer science professor Arvind Narayanan notes that the model “is trained to produce plausible text. It is very good at being persuasive, but it’s not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal.”5
What’s being optimized is the appearance of discipline, and that illusion is creating a false sense of AI success.
Average Is the Default Mode
The epistemic problem, optimizing for plausibility rather than truth, has a mirror in the aesthetic domain. These systems optimize for familiarity instead of originality. The result, documented now in peer-reviewed research across text, image, and code, is a measurable convergence toward the mean.
Anil Doshi and Oliver Hauser documented the key paradox in Science Advances: AI writing assistance can improve individual creativity scores while reducing the collective diversity of content produced by multiple writers.6 Different people using the same model converge, and the model does not amplify distinct perspectives. It acts as a gravitational center, pulling everything toward the statistical average of what was in its training data.
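To see what “collective diversity” means operationally, here is a minimal sketch, with invented essay fragments and a deliberately crude bag-of-words representation rather than the embeddings such studies actually use, of the corpus-level metric they report: mean pairwise distance across a set of documents.

```python
# A rough sketch of how "collective diversity" is typically quantified:
# mean pairwise distance between documents. The texts and the numbers they
# produce are invented; only the shape of the metric is the point.
from itertools import combinations
from collections import Counter
import math

def bow(text):
    return Counter(text.lower().split())

def cosine_distance(a, b):
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)

def collective_diversity(texts):
    """Mean pairwise distance across a corpus: higher means more diverse."""
    vecs = [bow(t) for t in texts]
    pairs = list(combinations(vecs, 2))
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

human_essays = ["my grandmother taught me to weld",
                "i map storm drains for fun",
                "failing the driving test twice changed me"]
assisted_essays = ["overcoming adversity taught me resilience",
                   "overcoming challenges taught me perseverance",
                   "facing adversity taught me growth"]

print(round(collective_diversity(human_essays), 2))     # comparatively high
print(round(collective_diversity(assisted_essays), 2))  # comparatively low
```

Individually, each assisted essay may read perfectly well; the metric only drops when you look across the set, which is where the homogenization shows up.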

A related study analyzed 2,200 college admissions essays across three preregistered experiments and found that each additional human-written essay contributed more new ideas to the collective corpus than each additional GPT-4 essay. The homogenizing effect persisted even after modifications intended to enhance output diversity.7
This is not confined to text. Research published in PNAS found that LLM-generated short stories exhibit repetitive “echoed” combinations of plot elements across generations, while human stories maintain far higher uniqueness.8 An experiment published in Patterns (Cell Press) linked a language model and an image generator in an autonomous loop. Across 700 runs, all outputs converged on twelve dominant visual motifs — stormy lighthouses, palatial interiors, atmospheric cityscapes. The researchers called these outputs “visual elevator music”.
Sure, the images were technically competent and aesthetically inoffensive, but they were devoid of the friction that distinguishes art from decoration.9
Artist, author and Distinguished Professor at the Graduate Center of the City University of New York Lev Manovich has identified the specific biases embedded in image-generation systems: saturation and contrast bias (the Instagram look), symmetry and order bias, and a tendency toward stereotypical or idealized outputs representing what the model “thinks” an image should look like.10 This phenomenon is the predictable outcome of training on what already existed, weighted by what was already popular. The model produces what is familiar and safe, favoring the high-probability output. Every modality gravitates toward the mean.
Taste — which has always been an act of principled exclusion, of saying not this in the face of the acceptable — is what these systems cannot perform.
Kyle Chayka put it plainly in The New Yorker. “A.I. is a technology of averages: large language models are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus, both in the quality of the writing, which is often riddled with clichés and banalities, and in the calibre of the ideas.”11
Kant argued in the Critique of Judgment that aesthetic judgment — taste — involves a claim to universality grounded in disinterested attention to the object.12 Taste is a discipline—a structured way of attending to quality that requires suppressing self-interest, habit and the pull towards the familiar. Writer and critic Susan Sontag put it directly, writing that “Intelligence, as well, is really a kind of taste: taste in ideas.”13
What AI systems produce is the inverse — outputs shaped entirely by aggregated preferences, trained on the accumulated record of what humanity found acceptable rather than what it found excellent.
The feedback loop is already closing. Ilia Shumailov and colleagues published research in Nature demonstrating what happens when AI-generated content trains subsequent AI models, a process now essentially unavoidable as the web fills with machine-produced text. Models undergo model collapse: the tails of the original data distribution disappear, rare and unique content is lost, and output diversity degrades.14 We are training tomorrow’s models on today’s averages, and the averages are not improving.
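A toy simulation makes the dynamic visible. This is not the Nature paper’s experimental setup, just an illustrative sketch under one simple assumption: each generation of a model can only reproduce what it saw in the previous generation’s output.

```python
# A toy simulation, not the Nature paper's setup, of how tails disappear when
# each generation is trained only on the previous generation's output.
# Integer "ideas" stand in for rare content; once sampled out, they never return.
import random

random.seed(42)
# Generation 0: a long-tailed "human" distribution over 1,000 distinct ideas.
corpus = [i for i in range(1000) for _ in range(max(1, 1000 // (i + 1)))]

for generation in range(6):
    distinct = len(set(corpus))
    print(f"generation {generation}: {distinct} distinct ideas survive")
    # The next "model" only ever reproduces what it saw: sample with replacement.
    corpus = random.choices(corpus, k=len(corpus))
# Distinct ideas fall each generation while the most common ones dominate:
# the empirical version of "the tails of the distribution disappear".
```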
Practitioners Are Not Compensating
The discipline problem is architectural and behavioral. The humans deploying these tools are not compensating for system-level failures in taste and accuracy; they are unknowingly amplifying them. Cognitive science helps make sense of the behavioral side of the discipline problem.
The automation bias literature dates to the late 1990s. Linda Skitka, Kathleen Mosier, and Mark Burdick established the foundational result in 1999: participants in non-automated settings outperformed those using a highly reliable, but not perfectly reliable, automated aid.15 The aid paradoxically degraded performance by displacing vigilant information-seeking. A decade later, Raja Parasuraman and Dietrich Manzey confirmed what had by then been replicated across dozens of settings: automation bias “occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision making in individuals as well as in teams.”16
Better onboarding does not solve for automation bias. When a fluent, confident-sounding output appears, the cognitive cost of challenging that output rises. The output occupies the space where judgment would otherwise have to do its work. Stanford’s Human-Computer Interaction group found across five studies (N=731) that AI explanations — the transparency features meant to enable appropriate reliance — generally do not reduce over-reliance. People agree with AI even when it is incorrect, because the rational choice under cognitive load and time pressure is to accept the output at face value.17
Over-reliance is effort minimization. It is a natural human response—a structural laziness—that emerges when a tool is available that will do the thinking for you.
And the cognitive load issue related to automation bias does not discriminate by domain. A Microsoft Research and Carnegie Mellon survey found that knowledge workers reported AI making tasks cognitively easier even as they ceded their capacity for problem-solving.18 Research published in AI & Society in 2025 argues that AI deskilling is a deeply embedded problem: AI mediates knowledge work in ways that create “capacity-hostile environments” where the cultivation of human competence is actively impeded.19
In fact, a multicenter clinical study in The Lancet Gastroenterology & Hepatology found that after three months of AI-assisted colonoscopy practice, endoscopists’ unassisted adenoma detection rates dropped from 28.4% to 22.4%.20 The tool did the work, as advertised, and the skill did not transfer. The result? When the tool was removed, the work did not get done as well.
Michael Gerlich’s 2025 empirical study found a significant negative correlation between AI tool usage and critical thinking scores (r = −0.68), and a strong positive correlation between AI usage and cognitive offloading (r = +0.72).21 The youngest cohort showed the highest AI dependence and the lowest critical thinking scores. The findings were generational, not individual. The discipline of thinking (attending carefully, checking the result, forming a view) is a capacity that requires exercise. What AI tools are doing, at scale, is removing the exercise of critical thinking.
The Art of Discipline
To better understand the absence of taste and judgment, it helps to reflect on discipline and why discipline is a form of intelligence sorely missing from workplaces and boardrooms. Library science is a good place to start: consider how librarians form critical taste and judgment, the discipline needed to discern quality information from noise.
S. R. Ranganathan articulated the five laws of library science in 1931, and from them the disciplines of selection and curation emerge.22 The second law (every person has their book) and the third (every book has its reader) articulate the core discipline of understanding what a user needs and what a resource contains, and bringing them into principled alignment. The fourth law (save the time of the reader) frames efficiency as an ethical obligation.
The librarian’s job is to select, arrange, and connect: to make the right things findable while exercising judgment about what does not meet the standard of information quality. This careful judgment is work that generative AI, trained on everything and optimizing for engagement, cannot perform. These systems cannot say not this, because LLMs lack a principled basis for exclusion.
Michael Bhaskar’s Curation: The Power of Selection in a World of Excess presents a compelling argument for the information economy, stating, “There is no longer any competitive advantage in creating more information. Today, value lies in curation.”23
The fundamental economic and epistemic shift of the digital era is from production scarcity to attention scarcity. The linchpin is selection amid the overwhelming volume of things that already exist. AI’s greatest feature and purpose, generating abundant content faster and with greater fluency, is the wrong response. Generating more information only deepens attention scarcity as humans struggle to manage cognitive load and automation bias.
Selection requires judgment, and also taste. Reflecting on taste, Sontag succinctly captured that intelligence is taste in ideas. Taste has no system, she allowed: no algorithm or formal proof procedure. But taste does possess a logic, and that logic is a sensibility. To reliably exercise taste, one must possess judgment. And to exercise judgment, one must operate with discipline. Discipline develops through exposure, attention and the accumulation of judgments that determine what is good and why. Discipline cannot be reduced to retrieval, nor is it computable from a training set. Discipline requires, in Aristotle’s term, phronesis: practical wisdom, the cultivated capacity to discern the right action where general rules fail to capture judgment and taste.24
Kant’s account of aesthetic judgment adds that genuine taste requires disinterested attention: a suspension of personal interest, habit, and the pull of the familiar.25 It requires caring about the quality of output as much as the quality of input, beyond approval chains. RLHF, however well intentioned, generally works to match predetermined approval chains, assessing for the bare minimum of what is acceptable. There is no disinterested perspective in RLHF. There is a void of care in Robert Pirsig’s sense: the attentiveness to quality that requires slowing down, attending to the object and refusing the shortcut.26 AI produces output maximized for approval-seeking, while taste requires the willingness to be unpopular.
Evidence of Collapse Is in the Record
The evidence of what happens when discipline is absent is plainly visible across systems and society, and it is growing.
Analysis of active web pages finds that at least 30% of text now originates from AI-generated sources, with the actual proportion likely approaching 40%.27 Meanwhile, Pew Research Center analysis of 2.5 million webpage visits found that when users encounter AI-generated search summaries, they click through to source material at roughly half the rate of users who see standard results, and are more likely to end their session entirely without visiting any website.28 The web is increasingly a system optimized to answer rather than to connect, and the sources that produce the answers are growing harder to trace.
Pan and colleagues, writing in National Science Review, document how generative AI complicates the already difficult problem of studying social media information flows, because its provenance is opaque and its volume is structurally incompatible with traditional verification practices.29 Møller and colleagues confirmed experimentally that AI-generated content on social media platforms alters engagement patterns in ways that are difficult to detect from surface behavior alone.30
The provenance and content-authenticity problem extends into scientific publishing, where, as of September 2024, up to 22% of computer science abstracts show evidence of LLM modification, a figure that has grown steadily since ChatGPT’s release and spans every major discipline.31 There is the appearance of discipline, measured by the amassment of content, without the discipline that comes with taste and judgment.
In code, GitClear’s analysis of 153 million changed lines found that adoption of AI coding assistants correlated with doubled code churn and increased copy-pasted code; their 2025 update reports a fourfold growth in code clones. A Stanford user study found that developers with access to an AI assistant wrote significantly less secure code than those without, and were more likely to believe their code was secure.32 In short: worse output, greater confidence. That combination is the signature of automation bias and generative plausibility operating together.
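What a “code clone” metric counts is worth making concrete. The sketch below is a crude stand-in, not GitClear’s methodology: it compares normalized k-line windows and reports the fraction that appear more than once, and the snippet it scans is hypothetical.

```python
# A crude illustration, not GitClear's methodology, of what a "code clone"
# metric counts: identical multi-line windows appearing more than once.
from collections import Counter

def clone_rate(source: str, window: int = 4) -> float:
    """Fraction of `window`-line spans whose normalized text appears elsewhere."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    spans = ["\n".join(lines[i:i + window]) for i in range(len(lines) - window + 1)]
    counts = Counter(spans)
    duplicated = sum(1 for s in spans if counts[s] > 1)
    return duplicated / len(spans) if spans else 0.0

# Hypothetical example: the same validation block pasted into two functions.
snippet = """
def check_profile(user):
    if user is None:
        raise ValueError("missing user")
    if not user.active:
        raise ValueError("inactive user")

def check_order(user):
    if user is None:
        raise ValueError("missing user")
    if not user.active:
        raise ValueError("inactive user")
"""
print(f"clone rate: {clone_rate(snippet):.0%}")
```

Real clone detectors normalize identifiers and tolerate small edits; even this naive version shows how pasted-in boilerplate inflates the number.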
In Mata v. Avianca, Inc. (2023), attorneys submitted briefs citing judicial decisions ChatGPT had invented, complete with case names, docket numbers, and fabricated quotations.33 Judge Castel sanctioned the attorneys and noted that existing rules impose a “gatekeeping role” on lawyers precisely because courts depend on human professionals to exercise the discipline that prevents this kind of failure. Since that ruling, researcher Damien Charlotin has tracked over 1,150 documented cases of AI-hallucinated content in legal filings worldwide.34 Because courts run on precedent and precedent runs on accuracy, discipline here means caring enough to check: exercising the judgment and taste to verify citations and quality before filing.
In science, at least 60,000 papers published in 2023 showed signs of undisclosed AI generation, based on analysis of telltale phrases by UCL librarian Andrew Gray.35 Jutta Haider and colleagues found that approximately 62% of papers containing such phrases did not disclose AI use, and that 57% dealt with policy-relevant subjects.36 Science editor-in-chief H. Holden Thorp declared in 2023 that AI-generated text “cannot be used in the work” and that violation “will constitute scientific misconduct no different from altered images or plagiarism.” When ChatGPT-generated abstracts were submitted to academic peer reviewers, they caught 68% of the fakes.37
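The telltale-phrase method is easy to picture. Below is a minimal sketch of that kind of analysis, not Gray’s actual corpus or word list: count how often suspect marker words appear per abstract, year over year, and look for the discontinuity.

```python
# A minimal sketch of a telltale-phrase analysis: compare the per-document
# frequency of suspect marker words across years. The marker list and the
# abstracts dictionary are placeholders, not Gray's data.
import re

MARKERS = {"delve", "intricate", "commendable", "meticulously", "underscores"}

abstracts_by_year = {          # hypothetical corpus: {year: [abstract, ...]}
    2021: ["We study routing on sparse graphs.", "A survey of cache policies."],
    2023: ["We delve into the intricate interplay of caching, which underscores...",
           "This commendable framework can delve meticulously into routing."],
}

def marker_rate(texts):
    """Average number of marker-word hits per document."""
    hits = sum(len(MARKERS & set(re.findall(r"[a-z]+", t.lower()))) for t in texts)
    return hits / len(texts)

for year, texts in sorted(abstracts_by_year.items()):
    print(year, round(marker_rate(texts), 2))
# A sharp jump in the marker rate after 2022 is the aggregate signal such
# analyses look for; it cannot, on its own, convict any single paper.
```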
The institutional response has been, broadly, policies and disclosure requirements, the weakest possible interventions. They address the symptom (undisclosed generation) rather than the cause (the absence of curatorial judgment in what gets produced, deployed, and accepted). Requiring disclosure that AI was used tells us nothing about whether anyone exercised the discipline to evaluate what it produced.
Curation, Judgement and Taste
Bourdieu argued that taste “classifies, and it classifies the classifier.”38 Of course, judgments about quality are not neutral; they encode values, priorities and commitments. Curatorial decisions cannot exist without such judgment and taste. While there are opportunities for bias, the discipline that comes with judgment and taste requires awareness and accountability, rather than the statistical aggregation of what was previously rewarded. One could argue that discipline, even with the risk of biased judgment and taste, is far preferable to the reproduction of existing hierarchies without even the pretense of reflection.
Curation often extends past rules. Aristotle’s phronesis, the capacity to exercise contextual judgment where rules underdetermine the answer,39 applies directly to the act of curation, where judgment and taste are paramount. An LLM has no such capacity: it produces outputs that are structurally unaccountable, generated by opaque processes, trained on undocumented data, and evaluated by metrics that favor approval. The accountability structure has been removed in order to please the user.
The Refusal
This essay is not an argument against AI tools. It is an argument that their use requires discipline if they are to be useful and productive.
Production without discipline amplifies the noise of endless AI-generated content, homogenized toward the mean: content that is plausible but has not been evaluated, verified or arranged. This is happening, at scale, across every domain where generative AI has been deployed without genuine curatorial judgment.
Refusing to accept plausibility as a proxy for quality requires discipline: acts of taste in the Sontag sense, principled, consistent, grounded in a commitment to what is good. It is a commitment that brings Ranganathan’s fourth law, save the time of the reader, into the present, where attention is scarce and AI-generated content is abundant.
Quality can only be achieved with gatekeeping, because it requires someone willing to say no. Saying no requires taste. Taste requires the cultivation of a critical faculty over time, through exposure, effort and the accumulation of principled distinctions. Saying no takes discipline, and taste and judgment are not features of the tool; they are conditions of the practitioner.
Footnotes
Sharma, Mrinank, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, Samuel R. Bowman, et al. (2023). Towards understanding sycophancy in language models. arXiv, 2310.13548. https://arxiv.org/abs/2310.13548. Published as a conference paper at ICLR 2024.
Lin, Stephanie, Jacob Hilton, and Owain Evans. (2022). TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3214–3252. https://arxiv.org/abs/2109.07958
Frankfurt, Harry G. (2005). On bullshit. Princeton University Press. (Originally published in Raritan Quarterly Review, 6(2), 81–100, 1986.)
Hicks, Michael Townsen, James Humphries, and Joe Slater. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26 (2), Article 38. https://link.springer.com/article/10.1007/s10676-024-09775-5
Narayanan, Arvind. (2023, January 28). Interview by Julia Angwin. Decoding the hype about AI. The Markup. https://themarkup.org/hello-world/2023/01/28/decoding-the-hype-about-ai. See also Narayanan, Arvind, and Sayash Kapoor. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press.
Doshi, Anil R., and Oliver P. Hauser. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10, eadn5290. https://doi.org/10.1126/sciadv.adn5290
Moon, et al. (2025). Homogenizing effect of large language models (LLMs) on creative diversity: An empirical comparison of human and ChatGPT writing. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S294988212500091X. See also Anderson, Barrett R., Jash Hemant Shah, and Max Kreminski. (2024). Homogenization effects of large language models on human creative ideation. In Proceedings of the 16th Conference on Creativity & Cognition (C&C 2024). ACM. https://dl.acm.org/doi/10.1145/3635636.3656204
Xu, Weijia, Nebojsa Jojic, Sudha Rao, Chris Brockett, and Bill Dolan. (2025). Echoes in AI: Quantifying lack of plot diversity in LLM outputs. Proceedings of the National Academy of Sciences, 122(35), e2504966122. https://www.pnas.org/doi/10.1073/pnas.2504966122
Hintze, Arend, et al. (2025). Autonomous language-image generation loops converge to generic visual motifs. Patterns (Cell Press). DOI: 10.1016/j.patter.2025.101451. See also Elgammal, Ahmed. (2025/2026). AI-induced cultural stagnation is no longer speculation — it’s already happening. The Conversation. https://theconversation.com/ai-induced-cultural-stagnation-is-no-longer-speculation-its-already-happening-272488
Manovich, Lev, and Emanuele Arielli. (2021–2024). Artificial aesthetics: Generative AI, art and visual media. https://manovich.net/index.php/projects/artificial-aesthetics. See also De Clercq, Rafael. (2025). AI and aesthetic bias. PhilArchive. https://philarchive.org/archive/DECAAA-7
Chayka, Kyle. (2025, June 25). A.I. is homogenizing our thoughts. The New Yorker. https://www.newyorker.com/culture/infinite-scroll/ai-is-homogenizing-our-thoughts
Kant, Immanuel. (1790). Kritik der Urteilskraft [Critique of judgment]. Trans. Paul Guyer and Eric Matthews. Cambridge University Press, 2000.
Sontag, Susan. (1966). Against interpretation and other essays. Farrar, Straus and Giroux.
Shumailov, Ilia, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://www.nature.com/articles/s41586-024-07566-y
Skitka, Linda J., Kathleen L. Mosier, and Mark Burdick. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006. https://www.sciencedirect.com/science/article/abs/pii/S1071581999902525
Parasuraman, Raja, and Dietrich H. Manzey. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://pubmed.ncbi.nlm.nih.gov/21077562/
Vasconcelos, Helena, Matthew Jörke, Madeleine Grunde-McLaughlin, Tobias Gerstenberg, and Michael S. Bernstein. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), Article 129. https://dl.acm.org/doi/10.1145/3579605
The AI deskilling paradox. (2025). Communications of the ACM. https://cacm.acm.org/news/the-ai-deskilling-paradox/
Marin, Lavinia, and Steffen Steinert. (2025). AI deskilling is a structural problem. AI & Society. https://link.springer.com/article/10.1007/s00146-025-02686-z
Budzyń, Kamil, et al. (2025). Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study. The Lancet Gastroenterology & Hepatology.
Gerlich, Michael. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
Ranganathan, Shiyali Ramamrita. (1931). The five laws of library science. Madras Library Association / Edward Goldston.
Bhaskar, Michael. (2016). Curation: The power of selection in a world of excess. Piatkus / Little, Brown.
Aristotle. Nicomachean Ethics, Book VI (c. 340 BCE). See also Kristjánsson, Kristján, Blaine Fowers, Catherine Darnell, and David Pollard. (2021). Phronesis (practical wisdom) as a type of contextual integrative thinking. Review of General Psychology. https://journals.sagepub.com/doi/full/10.1177/10892680211023063
Kant, Immanuel. (1790). Critique of judgment. See note 12 above.
Pirsig, Robert M. (1974). Zen and the art of motorcycle maintenance: An inquiry into values. William Morrow.
Spennemann, Dirk HR. (2025). Delving into: the quantification of AI-generated content on the internet (synthetic data). arXiv:2504.08755. https://arxiv.org/abs/2504.08755
Chapekis, Athena, et al. (2025). What web browsing data tells us about how AI appears online. Pew Research Center. https://www.pewresearch.org/data-labs/2025/05/23/what-web-browsing-data-tells-us-about-how-ai-appears-online/
Pan, Liming, Chong-Yang Wang, Fang Zhou, and Linyuan Lü. (2025). Complexity of social media in the era of generative AI. National Science Review, 12(1), nwae323. https://doi.org/10.1093/nsr/nwae323
Møller, Anders Giovanni, Daniel M. Romero, David Jurgens, and Luca Maria Aiello. (2026). The impact of generative AI on social media: an experimental study. Scientific Reports, 16, 9376. https://doi.org/10.1038/s41598-026-40110-8
Liang, W., et al. (2025). Quantifying AI’s influence on scientific writing. Nature Human Behaviour. https://doi.org/10.1038/s41562-025-02273-8. Reported in: One-fifth of computer science papers may include AI content. Science. https://www.science.org/content/article/one-fifth-computer-science-papers-may-include-ai-content
Perry, Neil, Megha Srivastava, Deepak Kumar, and Dan Boneh. (2023). Do users write more insecure code with AI assistants? In Proceedings of ACM CCS ’23. https://arxiv.org/html/2211.03622v3
Mata v. Avianca, Inc. 678 F.Supp.3d 443 (S.D.N.Y. 2023). Judge P. Kevin Castel. https://caselaw.findlaw.com/court/us-dis-crt-sd-new-yor/2335142.html
Charlotin, Damien. (2025). Hallucinations tracker. https://www.damiencharlotin.com/hallucinations/
Stokel-Walker, Chris. (2024, May 1). AI chatbots have thoroughly infiltrated scientific publishing. Scientific American. https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/
Haider, Jutta, Kristofer Rolf Söderström, Björn Ekström, and Malte Rödl. (2024, September 3). GPT-fabricated scientific papers on Google Scholar. Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/
Thorp, H. Holden. (2023, January 26). ChatGPT is fun, but not an author. Science. https://www.science.org/doi/10.1126/science.adg7879
Bourdieu, Pierre. (1984). Distinction: A social critique of the judgement of taste (R. Nice, Trans.). Harvard University Press. (Original work published 1979.)
Aristotle. Nicomachean Ethics, Book VI. See note 24 above.