A new piece in The New Yorker takes a look at Anne Harrington’s new book Mind Fixers (covered previously on AHP here and here). As Jerome Groopman writes,
Her narrative begins in the late nineteenth century, when researchers explored the brain’s anatomy in an attempt to identify the origins of mental disorders. The studies ultimately proved fruitless, and their failure produced a split in the field. Some psychiatrists sought nonbiological causes, including psychoanalytic ones, for mental disorders. Others doubled down on the biological approach and, as she writes, “increasingly pursued a hodgepodge of theories and projects, many of which, in hindsight, look both ill-considered and incautious.” The split is still evident today.
The history that Harrington relays is a series of pendulum swings. For much of the book, touted breakthroughs disappoint, discredited dogmas give rise to counter-dogmas, treatments are influenced by the financial interests of the pharmaceutical industry, and real harm is done to patients and their loved ones. One thing that becomes apparent is that, when pathogenesis is absent, historical events and cultural shifts have an outsized influence on prevailing views on causes and treatments. By charting our fluctuating beliefs about our own minds, Harrington effectively tells a story about the twentieth century itself.
The full piece can be read online here.
AHP readers may be interested in a recent article investigating the history of Abraham Maslow’s famous pyramid of needs, as well as a recent Quartz piece that delves further into the ubiquity of Maslow’s pyramid.
“Who Built Maslow’s Pyramid? A History of the Creation of Management Studies’ Most Famous Symbol and Its Implications for Management Education,” Todd Bridgman, Stephen Cummings and John Ballard. Abstract:
Abraham Maslow’s theory of motivation, the idea that human needs exist in a hierarchy that people strive to satisfy progressively, is regarded as a fundamental approach to understanding and motivating people at work. It is one of the first and most remembered models encountered by students of management. Despite gaining little support in empirical studies and being criticized for promoting an elitist, individualistic view of management, Maslow’s theory remains popular, underpinned by its widely recognized pyramid form. However, Maslow never created a pyramid to represent the hierarchy of needs. We investigate how it came to be and draw on this analysis to call for a rethink of how Maslow is represented in management studies. We also challenge management educators to reflect critically on what are taken to be the historical foundations of management studies and the forms in which those foundations are taught to students.
New over at Behavioral Scientist, as part of a special issue on the intersection of behavioral science and public policy, is a piece by Alexandra Rutherford on the origins and import of the “1 in 5” sexual assault statistic. This history is also explored more fully in a recent article-length piece in History of the Human Sciences.
As Rutherford notes,
It is now over 30 years since Koss first published her work on hidden rape victims. Instead of rehashing whether “1 in 5” is valid and whether women are reliable interpreters of their own experiences, we should be asking why it is so hard for us to hear these experiences and connect them to larger structures of power and domination. The history of “1 in 5” challenges us to critically examine, in the present moment, who has the power to name rape and be believed, under what conditions, and with what consequences.
Read the full piece here.
In a piece over on Technology’s Stories – a project run by the Society for the History of Technology (SHOT) – Kira Lussier explores the move from intuition as a human capacity to intuition as a feature of computers. In “From the Intuitive Human to the Intuitive Computer” Lussier examines
how intuition became a touchpoint within burgeoning debates around information technology systems in corporations in the 1970s and 1980s, as psychologists, IT designers and executives debated questions that continue to haunt our contemporary moment: How could computer systems, and the vast quantities of data they produce, aid managerial decision-making? What type of work could be automated and what remained the province of human expertise? Which psychological capacities, if any, could be outsourced to machines, and which remain uniquely human capacities? By turning to the past, I interrogate how practical concerns about how to design information systems were inextricably bound up in more theoretical, even existential, concerns about the nature of the human who could make such technology work.
Read the full piece online here.
If you have been following the recent Cambridge Analytica scandal, Luke Stark’s recent Slate piece situating psychology within the long history of computer science leading up to the controversy is sure to be of interest. As Stark observes,
I’ve been arguing for years that the integration of digital media devices and psychological techniques is one of the most underappreciated developments in the history of computing. For more than 50 years, this has been the domain of computer scientists who have approached the brain as a “human processor,” just another machine to be tinkered with. The work has taken place almost entirely in the domain of computer science, with little input from clinical psychologists, ethicists, or other academic fields interested in the messy details of human social life. Understanding that shortsighted perspective, and how it gave rise to companies like Cambridge Analytica, can help us curtail the weaponization of social media today.
Read the full piece online here.