In a piece over on Technology’s Stories – a project run by the Society for the History of Technology (SHOT) – Kira Lussier explores the move from intuition as a human capacity to intuition as a feature of computers. In “From the Intuitive Human to the Intuitive Computer,” Lussier examines
how intuition became a touchpoint within burgeoning debates around information technology systems in corporations in the 1970s and 1980s, as psychologists, IT designers and executives debated questions that continue to haunt our contemporary moment: How could computer systems, and the vast quantities of data they produce, aid managerial decision-making? What type of work could be automated and what remained the province of human expertise? Which psychological capacities, if any, could be outsourced to machines, and which remain uniquely human capacities? By turning to the past, I interrogate how practical concerns about how to design information systems were inextricably bound up in more theoretical, even existential, concerns about the nature of the human who could make such technology work.
Read the full piece online here.
If you have been following the recent Cambridge Analytica scandal, Luke Stark’s recent Slate piece situating psychology within the long history of computer science leading up to the controversy is sure to be of interest. As Stark observes,
I’ve been arguing for years that the integration of digital media devices and psychological techniques is one of the most underappreciated developments in the history of computing. For more than 50 years, this has been the domain of computer scientists who have approached the brain as a “human processor,” just another machine to be tinkered with. The work has taken place almost entirely in the domain of computer science, with little input from clinical psychologists, ethicists, or other academic fields interested in the messy details of human social life. Understanding that shortsighted perspective, and how it gave rise to companies like Cambridge Analytica, can help us curtail the weaponization of social media today.
Read the full piece online here.
The European Research Consortium for Informatics and Mathematics’ publication ERCIM NEWS put out a special issue on ‘scientific data sharing and re-use.’ In it Christine Borgman (of UCLA’s Department of Information Studies) briefly touches on some of the topics covered in her new volume Big Data, Little Data, No Data: Scholarship in the Networked World (2015, MIT Press).
In her book, Borgman argues that data are meaningful only within infrastructures or ecologies of knowledge, and discusses the management and exploitation of data as particular kinds of investments in the future of scholarship. Her take on the history of big data and the growing enthusiasm for data sharing – an enthusiasm that, she asserts, often obscures the challenges and complexities of data stewardship – is relevant to historians of the social sciences. An excerpt:
Data practices are local, varying from field to field, individual to individual, and country to country. Studying data is a means to observe how rapidly the landscape of scholarly work in the sciences, social sciences, and the humanities is changing. Inside the black box of data is a plethora of research, technology, and policy issues. Data are best understood as representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Rarely do they stand alone, separable from software, protocols, lab and field conditions, and other context. The lack of agreement on what constitutes data underlies the difficulties in sharing, releasing, or reusing research data.