A recent piece, “Apocryphal Psychotechnologies,” published in Continent, may interest AHP readers. Continent is “a platform for thinking through media. text, image, video, sound and new forms of publishing online are presented as reflections on and challenges to contemporary conditions in politics, media studies, art, film and philosophical thought.” As author Anthony Enns writes,
Apocryphal technologies are particularly interesting for the study of technological imaginaries precisely because they blur the boundaries between the legitimate and the illegitimate or the plausible and the implausible. For instance, it is often difficult to distinguish apocryphal technologies from real technologies because they tend to be based on the same underlying principles and assumptions. The aspirations that inform apocryphal technologies can also inform real technological innovations by serving as a springboard for new ideas or by anticipating the development of new inventions. The combination of fantastic effects and apparent plausibility also makes apocryphal technologies particularly suitable for conspiracy theories, which similarly encourage a belief in the impossible by imposing a veneer of truth and veracity. Unlike imaginary technologies, therefore, apocryphal technologies can promote faith in technological progress as well as fear of technocratic control. The following paper will explore the desires and anxieties that inform apocryphal technologies by examining a series of electronic devices that allegedly influenced (or were influenced by) the mind. While the claims made about these machines were not supported by scientific research, they were all based on a common understanding of the mind as an electronic apparatus that was subject to modification and manipulation, and they reflected a shared desire for a perfect mind-machine interface, which was imagined as a source of either unlimited power or complete powerlessness. At the same time that these psychotechnologies blur the boundaries between the legitimate and the illegitimate or the plausible and the implausible, therefore, they also illustrate the uneasy tension between utopian aspiration and dystopian paranoia—particularly with regard to the future of humanity.
Science journalist Bahar Gholipour reports on the history of how Hans Helmut Kornhuber and Lüder Deecke’s 1964 Bereitschaftspotential research has been interpreted in neuroscience.
The article deftly surveys successive eras of interpretation of the study’s results, identifying the presumptions that shaped decades of seemingly positive replications, and showing how a growing understanding of ambient neuronal activity in the brain led to a reframing of the landmark results and opened new directions for inquiry.
Her narrative begins in the late nineteenth century, when researchers explored the brain’s anatomy in an attempt to identify the origins of mental disorders. The studies ultimately proved fruitless, and their failure produced a split in the field. Some psychiatrists sought nonbiological causes, including psychoanalytic ones, for mental disorders. Others doubled down on the biological approach and, as she writes, “increasingly pursued a hodgepodge of theories and projects, many of which, in hindsight, look both ill-considered and incautious.” The split is still evident today.
The history that Harrington relays is a series of pendulum swings. For much of the book, touted breakthroughs disappoint, discredited dogmas give rise to counter-dogmas, treatments are influenced by the financial interests of the pharmaceutical industry, and real harm is done to patients and their loved ones. One thing that becomes apparent is that, when pathogenesis is absent, historical events and cultural shifts have an outsized influence on prevailing views on causes and treatments. By charting our fluctuating beliefs about our own minds, Harrington effectively tells a story about the twentieth century itself.
Abraham Maslow’s theory of motivation, the idea that human needs exist in a hierarchy that people strive to satisfy progressively, is regarded as a fundamental approach to understanding and motivating people at work. It is one of the first and most remembered models encountered by students of management. Despite gaining little support in empirical studies and being criticized for promoting an elitist, individualistic view of management, Maslow’s theory remains popular, underpinned by its widely recognized pyramid form. However, Maslow never created a pyramid to represent the hierarchy of needs. We investigated how it came to be and draw on this analysis to call for a rethink of how Maslow is represented in management studies. We also challenge management educators to reflect critically on what are taken to be the historical foundations of management studies and the forms in which those foundations are taught to students.
It is now over 30 years since Koss first published her work on hidden rape victims. Instead of rehashing whether “1 in 5” is valid and whether women are reliable interpreters of their own experiences, we should be asking why it is so hard for us to hear these experiences and connect them to larger structures of power and domination. The history of “1 in 5” challenges us to critically examine, in the present moment, who has the power to name rape and be believed, under what conditions, and with what consequences.
how intuition became a touchpoint within burgeoning debates around information technology systems in corporations in the 1970s and 1980s, as psychologists, IT designers and executives debated questions that continue to haunt our contemporary moment: How could computer systems, and the vast quantities of data they produce, aid managerial decision-making? What type of work could be automated and what remained the province of human expertise? Which psychological capacities, if any, could be outsourced to machines, and which remain uniquely human capacities? By turning to the past, I interrogate how practical concerns about how to design information systems were inextricably bound up in more theoretical, even existential, concerns about the nature of the human who could make such technology work.
I’ve been arguing for years that the integration of digital media devices and psychological techniques is one of the most underappreciated developments in the history of computing. For more than 50 years, this has been the domain of computer scientists who have approached the brain as a “human processor,” just another machine to be tinkered with. The work has taken place almost entirely in the domain of computer science, with little input from clinical psychologists, ethicists, or other academic fields interested in the messy details of human social life. Understanding that shortsighted perspective, and how it gave rise to companies like Cambridge Analytica, can help us curtail the weaponization of social media today.
In the summer of 1954, a bus pulled into Robbers Cave State Park in the mountains of rural Oklahoma. The dozen 11-year-old boys on board, all of them strangers to each other, craned to catch a glimpse through the dusty windows of what for most of them was their first summer camp. For a week they explored the park, swam in a creek, and hiked in and around mountain caves. They didn’t know that a couple of days later a second group would arrive, also believing they had the park to themselves.
Social psychologist Muzafer Sherif and his team, disguised as camp counsellors, watched each group bond and form its own identity. The two groups named themselves the Rattlers and the Eagles, each with flag, anthem, dress code, leaders and followers, as well as shared rules and standards. “They staked out their territory,” Sherif’s research assistant, O.J. Harvey, told me. “Everything was ‘our’ – ‘our hideout’, ‘our creek’.”
The full article can be found (behind a paywall) here.
Since the early twentieth century, psychologists have been concerned with how technology affects health and well-being. In the 1930s, they weighed the effects of listening to the radio. In the 1960s, they turned their attention to television. And in more recent years, they have expanded their research to video games and cell phone use. Psychologists have always been vocal on questions about the long-term effects of entertainment technology.
However, both the past and present debates suggest that answering questions about the pros and cons of entertainment technology is complicated. Research findings have been mixed and therefore not easily translatable into policy statements, news headlines, or advice for parents. This was true in 1960 and it is true today.
Take, for example, debates regarding televised violence and childhood aggression. Between 1950 and 1970, televisions became a standard presence in American homes. However, not everyone believed they were a welcome addition. Parents, educators, and politicians questioned what they saw as excess violence and sexuality on TV.
In 1969, the Surgeon General’s Office deemed TV violence a public health problem and called on psychologists to provide definitive evidence on its effects. The million-dollar project was modeled on the well-known Surgeon General’s Advisory Committee on health-related risks of tobacco use. It was hoped that evidence from the social and behavioral sciences could similarly close the case on television violence and aggressive behavior.