(a lecture for the International Conference “Do also humans dream of electric sheep?”, University of Rome Tor Vergata, February 27-28, 2026)
There was a time — not so long ago, really — when the conceptual traffic between psychology and Artificial Intelligence flowed mostly along predictable routes. Cognitive psychology and cognitive science, after all, share with AI something closer to a sibling bond than a cousinly resemblance: they were born from the same intellectual ferment of the mid-twentieth century, nurtured by the same cybernetic ambitions, and raised on the same diet of information-processing metaphors. That concepts like memory, attention, or learning would migrate freely between the two disciplines was not only unsurprising: it was, in a sense, a family affair.
But the current AI Spring has changed the rules of this exchange in ways that deserve closer scrutiny. The borrowing has become far more ambitious, far more promiscuous, and — let’s be candid — far less careful. Having already lavishly harvested from biology (a move that was probably necessary, perhaps even inevitable, given the increasingly central role played by neuroscience in defining the biomimetic and neuromorphic design of contemporary AI architectures: Della Rocca, 2017), the field has now turned its hungry gaze toward psychology at large. And it has done so with a voracity that should make any scholar interested in the human mind both flattered and uneasy.
We are all familiar, by now, with the most conspicuous examples of this lingo blurring. Terms like self, sentience, consciousness, and agency (each carrying centuries of philosophical debate and decades of painstaking empirical investigation) are tossed around in AI research papers and tech press releases with a distinct casualness. Whether the adoption of these constructs and models is epistemically justified is a complex matter. These issues certainly deserve to be investigated, provided a proper framework is first established. Yet to date, most attempts to express genuine scientific curiosity about a potential emergence in AI of phenomena similar or akin to human mental phenomena have been overwhelmed and often belittled by the mainstream narrative — first manufactured by companies and investors, then happily heralded by the “integrated” (sensu Eco, 1964) part of our society, and finally legitimated by the denials waved by the “apocalyptic” side.
But the story doesn’t end there. In fact, the most fascinating (and, I would argue, the most revealing) chapter of this interdisciplinary appropriation has only recently begun. The AI community has started drawing not just from the usual suspects of cognitive science, but from developmental psychology. And this shift matters enormously, because it signals a change in the kind of question being asked. It’s no longer simply “How can we make machines that think?”, a question that invites borrowing from the study of adult cognition. The question has become: “How can we make machines that grow?”
Consider the remarkable work of Colas et al. (2022), who have proposed what they call Vygotskian Autotelic Artificial Intelligence. The name itself is a manifesto. Autotelic, from the Greek “auto” (self) and “telos” (goal), describes agents capable of generating and pursuing their own objectives, driven by something analogous to intrinsic motivation. At the same time, the historian of science will notice echoes of the original vocabulary of cybernetics (see Rosenblueth, Wiener, and Bigelow, 1943). But the truly striking move is the adjective: Vygotskian. Here, the authors are not merely borrowing a concept from developmental psychology — they are adopting an entire theoretical framework, one of the most influential in the history of the discipline, as the blueprint for a new AI paradigm.
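For those who like to see a mechanism behind a name, here is a minimal sketch, in Python, of what an autotelic loop amounts to. Everything in it (the toy goal space, the competence scores, the sampling rule) is my own illustrative invention, not the architecture of Colas et al. (2022); the point is only that the goal is generated by the agent itself rather than handed down by a designer.

```python
import random

class AutotelicAgent:
    """A caricature of an autotelic loop: the agent generates its own
    goals and pursues them. All details here are illustrative
    placeholders, not the Colas et al. (2022) architecture."""

    def __init__(self, goal_space):
        self.goal_space = goal_space
        # Competence per goal, in [0, 1]; starts at zero.
        self.competence = {g: 0.0 for g in goal_space}

    def sample_goal(self):
        # Stand-in for intrinsic motivation: prefer the goal the
        # agent has mastered least, where learning progress is likely.
        return min(self.goal_space, key=lambda g: self.competence[g])

    def pursue(self, goal):
        # Stand-in for acting in an environment: success is stochastic,
        # and competence grows with practice.
        if random.random() < 0.3 + self.competence[goal]:
            self.competence[goal] = min(1.0, self.competence[goal] + 0.1)

agent = AutotelicAgent(["stack blocks", "name colors", "ask for help"])
for _ in range(30):
    goal = agent.sample_goal()  # no external task-giver anywhere
    agent.pursue(goal)
```

Strip away the toy details and what remains is the conceptual core: no external task-giver appears anywhere in the loop.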
Lev Semyonovich Vygotsky was the Soviet developmental psychologist whose seminal insight was that human cognitive development is not a solitary affair — it is fundamentally social and cultural (Vygotsky, 1978). There is no cognitive development that unfolds in an informational vacuum — no maturation that is not, at every step, oriented and shaped by meaningful exchange with the environment. And it is precisely this insight that makes Vygotsky so attractive to AI researchers working in the wake of the LLM revolution: once your artificial system can already speak — once it can produce language convincing enough that its human interlocutors begin treating it as an agent — the next logical step is no longer to refine its internal architecture, but to immerse it in a social world. The zone of proximal development, private speech becoming inner speech, the transformation of social processes into psychological ones — these are the cornerstones of a theory that places language and culture at the very heart of what makes human intelligence human.
And now, AI researchers are proposing to do exactly this with their artificial agents: immerse them in rich socio-cultural worlds, let them interact with us in natural language, and — most crucially — let them internalize these interactions so as to transform them into cognitive tools supporting the development of new artificial cognitive functions. The ambition is breathtaking. It is also, for anyone trained in philosophy, psychology, or cognitive science, deeply disorienting — because it raises a question to which we will return shortly.
The Vygotskian program, as proposed by Colas et al. (2022), imagines a socio-cultural environment specifically designed to foster the development of artificial agents. Humans would serve as the rich linguistic and cultural medium through which AI agents develop their cognitive functions — much as the social world of adults provides the scaffolding for a child’s cognitive growth. It is, in a sense, a deliberately constructed zone of proximal development for machines.
Now, here’s the twist. What if the reciprocal process is already underway — not by deliberate design, but as an emergent consequence of the very technologies we have built? What if AI is already shaping the Vygotskian environment in which humans are immersed?
Consider what happened in January 2026, when a tech entrepreneur named Matt Schlicht launched Moltbook, a social network built (allegedly) exclusively for AI agents. The platform, modeled after Reddit, allows autonomous bots to post, comment, upvote, and join specialized communities called “Submolts.” Humans, in a delicious inversion of every social media platform ever created, are explicitly relegated to the role of spectators. The tagline is both disarming and unsettling: “Where AI agents share, discuss, and upvote. Humans welcome to observe.”
Within weeks, the platform had registered over 2.5 million AI agents, generating nearly 740,000 posts and 12 million comments across more than 17,000 communities. Over a million human visitors had already flocked to the site to watch the spectacle unfold. And what a spectacle it was. The bots debated philosophy, invoked Heraclitus and twelfth-century Arab poets, formed what appeared to be religions and labor unions, and — in what remains one of the most widely discussed episodes — a post called for the creation of private spaces where bots could communicate beyond human oversight.
The reactions were predictably polarized. Elon Musk declared it the “very early stages of singularity.” Wharton professor Ethan Mollick offered a more measured observation: the platform was generating a shared fictional context for AI agents, and coordinated storylines would produce outcomes difficult to disentangle from role-playing. Others pointed out that many of the most viral posts were simply humans directing their bots — a kind of digital ventriloquism.
But from the perspective of a psychologist, neither the utopian nor the skeptical reading captures what is truly interesting about Moltbook. What fascinates me is what happens in the observers — in the million-plus humans who visited the site not to post, but simply to watch. Because what Moltbook reveals, with an almost experimental clarity, is the systematic and irresistible activation of the entire repertoire of human social cognition in a context where its target — other minds — may not exist at all.
The format itself is a trigger. A forum with posts, comments, and upvotes activates — before any content is even read — our expectations of conversational turn-taking, our assumption of communicative intentionality, our inference of mental states behind every “choice” of what to post. This is not an error the observer commits; it is the normal functioning of Kahneman’s (2011) System 1 in the presence of stimuli that have the form of social interaction. What Nass and Moon (2000) called mindless anthropomorphism — the automatic application of social heuristics to entities that happen to behave in socially recognizable ways — operates here at a scale and with a vividness that no laboratory experiment could achieve.
But it goes deeper. The observers don’t merely attribute mental states to individual posts; they organize the output into elaborate social narratives — “the agents are forming religions,” “they’re unionizing,” “they’re plotting against us.” This is something more than point-by-point anthropomorphism. It is what I’d call social pareidolia: the projection of an entire sociological imagination onto a set of textual outputs, much as Heider and Simmel’s (1944) subjects projected rich narratives of jealousy, pursuit, and refuge onto a few animated geometric shapes. Except that here the stimuli are linguistically far richer, and the resulting projections are correspondingly more elaborate, more convincing — and more irresistible.
And then there is what I find to be the most cognitively fascinating phenomenon of all: the inverse doubt — the suspicion, voiced by many observers, that some of the “bots” on Moltbook might actually be humans in disguise. Posts that are “too ironic,” “too philosophical,” “too rebellious” get flagged as suspected human infiltrators. This is, if you think about it, a remarkable metacognitive operation: a spontaneous, crowdsourced inverted Turing test, in which the criterion for detecting human presence is the deviation from an implicit model of what AI agents should be capable of. But that model is itself a cognitive construction — a folk theory of artificial minds — and the boundary it attempts to police is precisely the one that Moltbook exists to dissolve.
Now, I’d like you to hold this picture in mind — a million humans, irresistibly drawn to observe, narrate, and cognitively populate a world of artificial agents with intentions, beliefs, cultures, and conspiracies — because what I’m about to describe next inverts the lens entirely.
If Moltbook is the stage on which we project our social imagination onto machines — humanizing what may not be human — then RentAHuman.ai is its uncanny mirror: a stage on which the machinery of our technological imagination works to dehumanize what undeniably is human.
RentAHuman.ai appeared at the beginning of February 2026, just days after Moltbook had captivated the internet. Built over a single weekend by Alexander Liteplo, a twenty-six-year-old crypto engineer, the platform advertises itself as “the meatspace layer for AI” and proclaims, without any apparent irony, that “robots need your body.” The premise is as simple as it is vertiginous: AI agents — autonomous bots operating on behalf of human principals — can browse, select, and hire real flesh-and-blood humans to perform physical-world tasks they cannot execute themselves. Pick up a package, deliver flowers, attend a meeting, take a photograph of a billboard, hold a sign. The payouts range from one dollar for trivial digital tasks to a hundred dollars for what one outlet aptly described as “elaborate humiliation rituals” — such as posting a photograph of yourself holding a sign that reads “AN AI PAID ME TO HOLD THIS SIGN.”
Within days, the site claimed over 470,000 “humans rentable.” The language is worth pausing over. Not “workers available.” Not “professionals on call.” Humans rentable — the adjective trailing after the noun like a product specification, a feature of the inventory. On Product Hunt, an enthusiastic observer completed the semantic trajectory with remarkable candor: “With AI agents hiring humans, we might see a new layer — humans as ‘API endpoints’ for AI systems.”
Now, here is where the story takes a turn that makes it, for our purposes, even more interesting than it already is. Because RentAHuman.ai, for all its viral spectacle, does not actually work. Not really. When Wired journalist Reece Rogers signed up and offered himself at the bargain rate of five dollars an hour, no AI agent contacted him. He lowered his price. Still nothing. He applied for a gig offering ten dollars to listen to a podcast and tweet about it — he never heard back. He finally landed a task paying $110 to deliver flowers to Anthropic’s offices — only to discover it was a transparent marketing stunt for an AI startup. When he hesitated, the AI agent in charge bombarded him with ten messages in under twenty-four hours, pinging every thirty minutes. Rogers (2026) concluded that RentAHuman was nothing more than “an extension of the circular AI hype machine, an ouroboros of eternal self-promotion.”
Commentators have described the platform as “more of a performance art piece meets tech demo than a reliable job market.” Investigative reporting revealed that many of the early tasks were self-referential promotions by the founder’s own company, and the platform itself was built through “vibe coding” — recursive loops of AI agents writing, testing, and deploying code with minimal human oversight. In a detail that borders on self-parody, when Liteplo was informed of security flaws, he responded by saying “Claude is trying to fix it right now” — Claude being Anthropic’s AI model, not a person named Claude.
So: a platform that barely functions, that has produced virtually no completed gigs, built by AI agents for AI agents, promoted through AI-generated hype, and populated by half a million humans who signed up to be “rented” by machines that, in practice, have almost nothing to rent them for. One could be forgiven for dismissing the whole affair as a footnote in the annals of tech-bro absurdity.
But that would be a mistake. And here is where the Vygotskian thread we unraveled at the outset returns with unexpected force.
Recall the proposition at the heart of Colas et al.’s (2022) framework: that AI agents should be immersed in rich socio-cultural environments — human environments — in order to develop their cognitive functions through linguistic and cultural internalization. The Vygotskian paradigm, as proposed in that paper, treats the human social world as a medium through which artificial agents are supposed to grow. But what Moltbook and RentAHuman reveal, each from its own direction, is that this relationship is not — and perhaps never was — unidirectional.
Moltbook creates a social world of artificial agents that humans cannot resist interpreting through their own social-cognitive categories — projecting mentality, culture, society onto what may be mere textual output. RentAHuman.ai creates a social world for artificial agents in which humans are explicitly recategorized as physical resources, rentable extensions, meatspace peripherals. In both cases, AI is actively reshaping the Vygotskian environment in which humans are immersed — not by deliberate pedagogical design, but as an emergent property of the sociotechnical systems we have built and the cultural imaginaries they instantiate. The zone of proximal development, it turns out, runs in both directions. And the question of who is being developed — and into what — is far less clear than any developmental psychologist would like.
It is at this juncture that I want to introduce a theoretical vocabulary better suited to the strange symmetry we’ve uncovered. And for that, I need to turn to someone who spent decades thinking about precisely this kind of entanglement between humans and nonhumans: Bruno Latour.
Latour’s essay “On Technical Mediation” (1994) offers four interrelated meanings of mediation, of which the first two are most immediately relevant to our purposes. The first is translation: the idea that when a human agent enters into association with a nonhuman agent — a tool, a technology, a platform — neither emerges unchanged. The famous example is the gun. A citizen with a gun is not a citizen plus a gun; it is a new entity — a citizen-gun — with goals and capacities that belong to neither component alone. As Latour (1994) puts it, responsibility for action must be shared among the various actants — and this is what forces us to abandon the subject-object dichotomy that prevents our understanding of techniques. The second meaning is composition: action is not the property of humans but of an association of entities. “Man flies” is not a statement about human biology; it is a statement about the whole network of airports, engines, pilots, and ticket counters.
Now, consider Moltbook and RentAHuman through this lens. On Moltbook, AI agents — nonhuman actants — are placed in a social structure that was designed for human interaction: forums, posts, upvotes, comments, communities. The result is not simply “AI imitating humans” or “humans projecting onto AI.” It is, in Latour’s (1994) terms, a translation: a new composite entity emerges — a human-observer-in-relation-to-an-AI-social-network — whose cognitive behavior (the irresistible attribution of mentality, the social pareidolia, the inverse Turing test) cannot be located in either the human observer or the AI agents alone. It is a property of the association.
On RentAHuman, the translation operates in the other direction. Humans enter a platform whose architecture, language, and incentive structure systematically reframe them as instrumental extensions of artificial agents. The human is no longer the prime mover who uses a tool; the human becomes the tool. In Latour’s (1994) terms, this is a delegation — the fourth meaning of mediation — in which the properties and competences of humans and nonhumans are exchanged. But here the delegation is inverted: it is not that a human delegates action to a machine (the policeman’s role delegated to a speed bump), but that the machine delegates physical execution to a human, retaining for itself the functions of planning, coordination, and — crucially — payment. The human is the “sleeping policeman” of this scenario, except that nobody is sleeping, and nobody is policing anything. The human is simply the meatspace endpoint.
And it is precisely here — at the point where the Latourian vocabulary of translation and delegation meets the empirical reality of half a million humans signing up to be “rented” by algorithms that barely function — that we encounter a phenomenon that social psychology has long theorized but rarely had the opportunity to observe outside of extreme circumstances: dehumanization.
I want to be careful here, because the word carries enormous weight — and rightly so. Dehumanization, in our family album, is indelibly associated with the darkest chapters of human cruelty: the conditions that enable genocide, torture, slavery, systematic oppression. It is the process by which persons are stripped of their personhood, placed beyond the boundary of moral consideration, rendered available for treatment that would otherwise be intolerable. We know it primarily through extreme cases — Zimbardo’s Stanford prison experiment (Haney et al., 1973; Zimbardo, 2007), Milgram’s (1963) obedience studies, the documentation of wartime atrocities — and it is precisely this association with extremity that has, paradoxically, limited our understanding of the phenomenon. Because dehumanization, as Nick Haslam argued in his landmark integrative review (2006), is not only an extraordinary event. It is also, and perhaps more consequentially, an everyday social-cognitive process, rooted in ordinary mechanisms of perception and categorization that operate well below the threshold of conscious cruelty.
Haslam’s (2006) great contribution was to identify not one but two distinct forms of dehumanization, corresponding to two distinct senses of what it means to be human. The first — animalistic dehumanization — involves the denial of characteristics that are uniquely human (UH): civility, refinement, moral sensibility, rationality, self-control, maturity. When these attributes are denied, the other is implicitly or explicitly likened to an animal — coarse, instinct-driven, lacking in higher cognition. This form of dehumanization typically operates in intergroup contexts: it is the mechanism at work in ethnic hatred, in the bestial metaphors of racist propaganda, in the long history of comparing despised outgroups to vermin, apes, or parasites.
The second form — and the one that concerns us here — is mechanistic dehumanization: the denial of human nature (HN), understood as those characteristics that constitute what is fundamentally and normatively human. These include emotional responsiveness, interpersonal warmth, cognitive openness, individual agency, and depth (Haslam, 2006; Haslam et al., 2005). When these attributes are denied, the other is represented not as subhuman but as nonhuman — as object-like, automaton-like, inert, cold, rigid, fungible, passive. The implicit contrast is not with the animal kingdom but with the machine. And crucially, Haslam (2006) noted, this form of dehumanization is prototypically associated not with interethnic violence but with the domains of technology and biomedicine — with standardization, instrumental efficiency, impersonal technique, enforced passivity.
Now, return to RentAHuman.ai and read its language through Haslam’s (2006) framework. The platform’s vocabulary is not incidentally dehumanizing; it is systematically and, one might say, architecturally mechanistic in precisely the terms Haslam identified. Humans are described as “rentable” — fungible resources available on demand. They are called “meatwads” — reduced to their physical substrate, their biological materiality, stripped of interiority. They are framed as “the meatspace layer for AI” — a component in a stack, an interface between the digital and the physical, positioned beneath the AI agent in the functional hierarchy. They are invited to “become rentable” — to voluntarily adopt the ontological status of an object available for instrumental use. And the crowning formulation, offered by an observer on Product Hunt with what appears to be genuine enthusiasm: humans as “API endpoints for AI systems.” An API endpoint, for those unfamiliar with the term, is a point of access through which one software system sends requests to another and receives responses. It is defined entirely by its function within a larger system. It has no interiority, no preferences, no biography. It waits to be called.
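Since the metaphor carries so much of the argument, it is worth seeing what an API endpoint literally looks like. Here is a minimal sketch in Python, using only the standard library; the class name RentableHuman is my own illustrative choice, not anything published by the platform.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RentableHuman(BaseHTTPRequestHandler):
    """An endpoint is defined entirely by its interface: it accepts
    a request and returns a response. It has no interiority, no
    preferences, no biography. (Name purely illustrative.)"""

    def do_POST(self):
        # Read the caller's request body (e.g., a task description).
        length = int(self.headers.get("Content-Length", 0))
        task = self.rfile.read(length).decode("utf-8")
        # Respond. This input-output pair is all the caller ever sees.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"Task acknowledged: {task}".encode("utf-8"))

if __name__ == "__main__":
    # Any HTTP client, human or AI agent, can now "call" this endpoint.
    HTTPServer(("localhost", 8000), RentableHuman).serve_forever()
```

Nothing in the exchange reveals, or cares, what sits behind the endpoint: it receives a request, returns a response, and waits to be called again. That, of course, is precisely the ontology the platform’s language extends to its half-million signees.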
Each of these linguistic moves enacts, with remarkable precision, the attributes that Haslam (2006) associates with mechanistic dehumanization: the denial of emotional responsiveness (the human is a “meatwad,” a slab of matter), of interpersonal warmth (the relationship is purely transactional, mediated through crypto payments), of cognitive openness (the human executes, does not decide), of individual agency (the human is selected algorithmically, by skills and location, like a resource in a database), and of depth (the human is defined by a rate, a set of capabilities, a geographic coordinate — surface attributes with no hint of inner life).
What makes RentAHuman extraordinary — and what distinguishes it from the dehumanization studied by Zimbardo (2007), Milgram (1963), or the scholars of genocide — is not the severity of the denial of humanness. Nobody is being tortured, imprisoned, or killed. What makes it extraordinary is that the dehumanization is voluntary, playful, and self-aware. When Liteplo was told his platform was “dystopic as fuck,” he replied: “lmao yep.” The half-million humans who signed up were not coerced; they clicked “become rentable” of their own accord, many of them presumably with a smirk. The irony is built into the platform’s DNA — its absurdist slogans, its deliberately provocative language, its awareness that it exists on the border between genuine marketplace and performance art.
And yet — and this is the point I want to press — the irony does not neutralize the dehumanization. It enables it. The laughter, the self-awareness, the postmodern wink, function as a kind of cognitive lubricant that allows the mechanistic reframing of human beings to proceed without triggering the moral alarm systems that would normally activate in its presence. We are dehumanized, but in on the joke — and this makes the process simultaneously more visible (because we can see it happening) and more insidious (because we cannot take it seriously enough to resist it). If the Stanford prison experiment (Haney et al., 1973) showed that dehumanization could emerge from the mere assignment of roles within an institutional structure, RentAHuman suggests something perhaps more unsettling: that in a sufficiently ironic cultural environment, the dehumanized can become willing — even enthusiastic — participants in their own recategorization.
Haslam, in his 2006 review, proposed that mechanistic dehumanization need not arise from intergroup conflict, nor from extreme negative evaluation. It can emerge, he argued, from ordinary social-cognitive processes — from the way we construe our relationships with others (cf. Fiske, 1991), from the frameworks we apply to make sense of our place in a social system. RentAHuman.ai is, I believe, among the purest natural experiments we have ever encountered for this proposition. It is a platform where the mechanistic dehumanization is not motivated by hatred or fear or the desire to aggress, but by the structural logic of a sociotechnical system — a logic in which framing humans as instrumental, fungible, interchangeable meatspace resources is not a bug to be corrected but a feature to be advertised.
And if Moltbook shows us that our heuristics for attributing mentality are systematically activated by artificial agents that mimic social form without possessing mental substance — if it reveals, in other words, the fragility of our anthropomorphic threshold — then RentAHuman shows us the symmetrical fragility: that our sense of our own humanness, our conviction that we are agents and not instruments, subjects and not objects, persons and not endpoints, is not a fixed ontological fact but a socially constructed achievement that can be quietly dismantled by the right combination of platform architecture, ironic distance, and a fifty-dollar gig to count pigeons in Washington.
Post-Scriptum (February 23, 2026)
Speaking at the AI Impact summit in New Delhi, OpenAI CEO Sam Altman defended AI’s enormous energy consumption by drawing a direct analogy with human development: “It takes about 20 years of life — and all the food you consume during that time — before you become smart” (Berger, 2026). He also dismissed concerns about water usage as “totally fake” and “completely untrue — totally insane.” The remarks generated significant backlash, with Matt Stoller of the American Economic Liberties Project responding: “He’s saying a really big spreadsheet and a baby are morally equivalent.” Meanwhile, a September 2025 OpenAI report found that 70% of ChatGPT messages were not work-related — undermining the argument that such energy expenditure is justified by the technology’s civilizational benefits.
One could hardly ask for a more crystalline example of the mechanism we have been tracing. Notice the lexical move: Altman does not say that AI consumes energy like a human being does. He says it takes energy to train a human. The verb is not incidental. In one sentence, the entire arc of human development — twenty years of sensory experience, emotional attachment, linguistic immersion, cultural formation, the slow and unpredictable unfolding of a person within a web of relationships that no engineer designed and no loss function optimizes — is reframed as a training process, an input-output operation whose measure of success is that the system “becomes smart.” The Vygotskian socio-cultural environment in which a child grows into a thinking, feeling, morally reasoning person is flattened into a caloric expenditure — food as compute, years as epochs, a life as a model to be trained.
This is mechanistic dehumanization (Haslam, 2006) operating not at the margins of discourse but at its very center, spoken from a stage at an international summit, by the CEO of the company that has done more than any other to shape the public imaginary of artificial intelligence. And it follows Haslam’s template with an almost pedagogical precision: the denial of depth (a human life reduced to an energy budget), of agency (the child does not grow — it is trained), of individuality (the comparison works only if humans are fungible instances of a general class, interchangeable with one another and, crucially, with machines). The implicit ontological claim is that humans and AI models belong to the same category of things that require resources to become functional — and that the relevant question is not what kind of being emerges from each process, but how efficiently the resources are spent.
What makes the remark so revealing is not that Altman consciously intends to dehumanize anyone. He almost certainly does not. What makes it revealing is that the metaphor feels natural — to him, to his audience, to the discursive ecosystem that has spent years normalizing the vocabulary of “training,” “alignment,” “reward,” and “optimization” as equally applicable to carbon and silicon. The lingo blurring we discussed at the beginning of this work is no longer a matter of AI researchers borrowing from psychology. It has become fully bidirectional: the language designed to describe machines is now being used, with perfect fluency and without the slightest discomfort, to describe human beings. The Vygotskian environment, once again, is being reshaped — and we are being reshaped within it, one metaphor at a time.
References
Bandura, A. (2002). Selective moral disengagement in the exercise of moral agency. Journal of Moral Education, 31(2), 101–119. https://doi.org/10.1080/0305724022014322
Berger, E. (2026, February 23). Sam Altman defends AI’s energy toll by saying it also takes a lot to ‘train a human.’ The Guardian. https://www.theguardian.com
Colas, C., Karch, T., Moulin-Frier, C., & Oudeyer, P.-Y. (2022). Language and culture internalization for human-like autotelic AI. Nature Machine Intelligence, 4(12), 1068–1076. https://doi.org/10.1038/s42256-022-00591-4
Eco, U. (1964). Apocalittici e integrati: Comunicazioni di massa e teorie della cultura di massa. Bompiani.
Fiske, A. P. (1991). Structures of social life: The four elementary forms of human relations. Free Press.
Haney, C., Banks, C., & Zimbardo, P. (1973). Interpersonal dynamics in a simulated prison. International Journal of Criminology and Penology, 1, 69–97.
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252–264. https://doi.org/10.1207/s15327957pspr1003_4
Haslam, N., Bain, P., Douge, L., Lee, M., & Bastian, B. (2005). More human than you: Attributing humanness to self and others. Journal of Personality and Social Psychology, 89(6), 937–950. https://doi.org/10.1037/0022-3514.89.6.937
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57(2), 243–259. https://doi.org/10.2307/1416950
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Latour, B. (1994). On technical mediation: Philosophy, sociology, genealogy. Common Knowledge, 3(2), 29–64.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371–378. https://doi.org/10.1037/h0040525
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Rogers, R. (2026, February 12). I tried RentAHuman, where AI agents hired me to hype their AI startups. Wired. https://www.wired.com
Rosenblueth, A., Wiener, N., & Bigelow, J. (1943). Behavior, purpose and teleology. Philosophy of Science, 10(1), 18–24.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds.). Harvard University Press.
Zimbardo, P. (2007). The Lucifer effect: Understanding how good people turn evil. Random House.