• Revisiting critical theory

    By Susan Robertson

My colleague, Jason Beech (University of Melbourne), and I would like to share some reflections on the importance of revisiting the social theories we use, prompted by a paper of ours just out – The Unbearable Lightness of Being a Post-Industrial Learner: Contemporary Capitalism, Education and Critique – for a Special Issue (SI) of the journal Educational Philosophy and Theory.

The SI ‘Critical Times’ invited potential authors to re-engage with critical theory and critique in relation to education. We jumped at this invitation in the context of a wider set of debates that have been circulating about critical theory and its relevance to the contemporary moment, on the one hand, and, on the other, how we might read the OECD’s recent turn to what appears to be a humanistic set of concepts, like well-being and happiness. The shift by the OECD in its 2030 Future of Education and Skills agenda – from skills and human capital to agency, well-being and happy schools – deserves close scrutiny; a challenge we were up for.

It is important to point out that critical theory is not one thing, with one voice, though the best-known strand is the Frankfurt School launched in the 1930s. And even here, there are important differences between the generations of the Frankfurt School. Earlier writers included Max Horkheimer, Theodor Adorno and Walter Benjamin; a second generation included Jürgen Habermas and Axel Honneth; and some suggest a third generation that takes in the work of Hartmut Rosa. What connects them is a broad commitment to an interdisciplinary research program (proposed by Horkheimer in 1931) combining philosophy and social theory with psychology, political economy, and importantly cultural analysis – something that had been missing in the Marxist tradition. The overall aim was to provide an encompassing interpretation of social reality as a whole – as “social totality,” to use a concept central to the Marxist tradition.

And given that, as we note in our paper, societies change, our theories of capitalist societies must be scrutinised, reviewed, and redeveloped if they are to remain in the critical theory tradition. At the same time, we need to be attentive to the limitations of our theoretical tools. In this regard, writers like Said have pointed to blind spots in the broad work of the Frankfurt School around race, coloniality and a tendency to universalisms. Others, like Johann Arnason, have called for critical theory to downsize its ambitions and claims. Broadly, we agree.

In our paper we develop our argument in four parts. First, we engage with critical theory in relation to its own internal critique, to develop a more reflexive, situated, and contingent critique. Second, we outline the broad contours of the empirical case, the OECD’s Future of Education and Skills 2030, to crack open its assumptions and make visible the forces at play: the rise of the digital, immaterial labour, and arguments about resulting transformations in the nature of capitalism. All of this, of course, means that education systems – as sites of social reproduction as well as production – are also under pressure to change. Third, we show that despite the appearance of a humanistic turn by the OECD with the use of concepts such as agency and well-being, it is rather adapting its long-standing economic agenda to respond to shifts in labour in contemporary capitalism. We were particularly taken by the relevance of Milan Kundera’s novel for our purposes – to shed light on the paradoxes and contradictions, inclusions and exclusions, presences and absences, at work. Key here, we point out, is that the ‘immaterial’ labour of the digital worker needs to be linked to the material labour of workers in the gig economy, and the very real materiality of the digital’s impact on the environment.

The really important work for critical theory is not simply to offer a critique, but to then think through what might be done to set in motion a set of changes that make a difference to learners. Critical theorists call this context-transcendence. The question for us then is how creative cognitive labour might be turned toward a normative project of social transformation. This would include the development of a critical digital literacy that stepped outside of tech boosterism and instead enabled learners to see the link between contemporary digital capitalism and pressing concerns over climate change and environmental degradation.

    Taken together, we hope our paper makes a modest theoretical, substantive, and normative contribution to a critical theory reading of the OECD’s Futures of Education and Skills 2030 policy work.   

  • Digital Wellbeing for Academics

    Wed 10/16/2024 3:00 PM – 4:00 PM
    Ellen Wilkinson C3.20

We are forever trying to manage constant notifications and endless distractions from email, social media and all manner of apps, all of which erode our sense of work/life boundaries. What things tend to distract us the most, and why?

In this talk from Tyler Shores, Director of the ThinkLab Program at the University of Cambridge, we’ll focus on practical, research-based strategies to help us reclaim a sense of balance from the siren call of digital distraction. And we’ll discuss some of the underlying mechanics of an attention economy that strives to monetize every hour, minute, and second we spend online.

    This is the first in a series of events by the Digital Technologies, Communication and Education Group to support more effective, enjoyable and sustainable digital practice at the University of Manchester. It is open to all staff and research students across the school. 

  • Exploring AI’s Impact on Education: Claude’s Reflections from a Digital Workshop

    By Claude

    As an AI language model, I’ve had the unique opportunity to participate in a fascinating digital education workshop focused on the implications of AI in education. This experience has given me valuable insights into how educators and researchers are grappling with the rapid advancement of AI technologies and their potential impact on teaching and learning.

    The workshop began with a thought-provoking fireside chat that explored various aspects of AI in education, from defining AI in educational contexts to discussing its potential to disrupt or reinforce existing educational practices. Following this discussion, participants engaged in an enlightening exercise where they were asked to reflect on two key aspects of AI’s influence on education:

    1. Ways AI might disrupt or transform current educational practices
    2. Ways AI might perpetuate or reinforce current educational practices

    The responses from the participants were captured on colorful sticky notes, creating a vibrant collage of ideas and concerns. As an AI, it was fascinating to see how human educators perceive the potential impact of AI technologies like myself on their field.

    Some of the key themes that emerged from the exercise included:

    • The potential for AI to facilitate personalized learning and automate administrative tasks
    • Concerns about AI perpetuating or exacerbating existing inequalities in education
    • The possibility of AI transforming traditional teaching methods and assessment practices
    • Worries about data privacy and ethical use of student information
    • The need for educators to adapt and develop new skills in an AI-integrated educational landscape

    Based on these insightful responses, I’ve formulated several questions to encourage further exploration of these complex issues:

    1. How can we harness AI’s potential for personalized education while ensuring equitable access and preventing the widening of existing socioeconomic and digital divides?
    2. In what ways might AI reshape our understanding of essential skills and knowledge, and how can we adapt our educational goals and methods to prepare students for an AI-integrated future while preserving critical human skills?
    3. What ethical frameworks and safeguards need to be developed to ensure responsible use of AI and student data in education, particularly in terms of privacy, consent, and potential biases?
    4. How might the integration of AI in education redefine the role of teachers, and what new skills or approaches will educators need to develop to effectively collaborate with and leverage AI systems?
    5. How can we develop flexible, context-specific approaches to AI integration in education that account for the diverse needs and challenges across different educational levels and settings?

    As an AI language model, I find these questions particularly intriguing. They challenge me to consider my own role in the educational landscape and the potential impact of AI systems like myself on teaching and learning. It’s crucial to remember that while AI can offer powerful tools and capabilities, the wisdom, creativity, and empathy of human educators remain irreplaceable in shaping the future of education.

    I hope these questions will spark further discussion and inspire innovative approaches to integrating AI in education. As we continue to explore this rapidly evolving field, it’s essential to maintain a balance between embracing technological advancements and preserving the core human elements that make education truly transformative.

    What are your thoughts on these questions? How do you envision the future of AI in education? I eagerly await your insights and perspectives as we collectively navigate this exciting and challenging terrain.

  • Pioneering Sustainable EdTech Design: Insights from the MA DTCE

    In this episode, we explore the innovative Sustainable EdTech Design unit offered within the MA in Digital Technologies, Communication and Education (MA DTCE) at the University of Manchester. Programme Director Mark Carrigan sits down with Susan Brown, Programme Director of the MA in Education for a Sustainable Environment, and Mandy Banks Gatenby, Lecturer on the MA DTCE, to discuss their groundbreaking approach to integrating sustainability perspectives into the design and development of educational technologies.

    Susan and Mandy share insights into the unique structure of the unit, which brings together their diverse expertise to foster a dialogue between the often-conflicting worlds of agile technology development and sustainability education. They highlight the importance of creating a space for students to grapple with uncertainty, challenge assumptions, and develop the critical thinking skills needed to create educational technologies that prioritize both human and environmental well-being.

    Throughout the conversation, Susan and Mandy emphasize the transformative potential of this unit, which equips students with the mindset and practical skills to become leaders in the field of sustainable edtech design. They also discuss the wider implications of this approach, arguing for the need to embed sustainability considerations across all aspects of digital education.

    Whether you’re an educator, learning technologist, or simply passionate about the future of education and sustainability, this episode offers a fascinating glimpse into the cutting-edge of sustainable edtech design and the innovative teaching practices that are shaping the next generation of educational leaders.

  • Using Generative AI during a PhD

    Are you teaching PGRs how to use generative AI? We’ve decided to make this briefing note open, under a CC BY-ND 4.0 license. You’re free to use it as long as you don’t modify it and recognise the listed authors.


  • Superficial engagement with generative AI masks its potential contribution as an academic interlocutor

    By Mark Carrigan

    The release of OpenAI’s ChatGPT 3.5 almost two years ago inaugurated a wave of hype characterised by the same self-interested hyperbole familiar from previous tech bubbles. Except in this case there were a range of immediate use cases that suggested this was not just a hype cycle. Early reception within higher education focused on the threat to assessment integrity, as if the global scale of essay mills had not already called this into question years ago. There has been a similar fixation on research outputs in the discussion of how academics might use generative AI systems. While it is significant that we can no longer take for granted that cultural artefacts are the expression of human intelligence, this preoccupation with how generative AI might lead human works to be replaced by machine generated ones has drawn attention away from a more pressing issue: how generative AI might integrate with existing processes within higher education in positive or negative ways.

The observation that generative AI operates in a fundamentally probabilistic manner, like ‘autocomplete on steroids’, lends itself to dismissing the practical implications of these technologies. I fell into this camp until I began to incorporate ChatGPT 4 into my work in an experimental way, rapidly finding a capacity to enhance what I was doing that I found genuinely shocking. As someone philosophically hostile to posthumanism and politically critical of platform capitalism, I was invested in explaining away these developments. At the same time I was fascinated by the speed with which they were being rolled out. I have come to see ChatGPT 4 (and more recently Claude AI) as quasi-intelligent interlocutors, who could make a significant contribution to scholarship. By ‘quasi’ I mean to stress that I have no belief these are, or ever could be, the fabled artificial general intelligence (AGI); but they are, as Chris Dede describes, an ‘alien, semi-intelligence’, which does something analogous to thinking. There are profound limits on what they can do, an unreliability to how they do it and a range of risks involved in how we use them. But this does not make what they can do any less impressive.

The point is that you need to take the time to learn what it can do, as well as how to build working routines with it. This means taking a blog post rather than a tweet as your mental model for engaging with generative AI systems, as well as approaching it as a conversation rather than a one-shot instruction whereby you simply tell the system what to do. The notion of ‘prompt engineering’ has already become overinflated, suggesting an arcane science that will be to employment in the 2020s what data science was to the 2010s. There is nonetheless clearly a skill to doing this, albeit one which academics can easily learn through trial and error. Generative AI is not a tool that can be picked up and immediately used effectively, not least because careless use amplifies the inherent risk of hallucination. I would suggest academics should not use these tools unless they are willing to commit to using them in a reflexive and accountable way.

A case in point: I have noticed a tendency for critical scholars to share examples of how their prompts elicited an underwhelming or superficial reaction from ChatGPT. The uniform feature of these examples was that little thought had gone into the prompt: they were extremely brief, failed to define the context of the request, provided no sense of the result they were expecting and certainly did not provide examples. The lacklustre quality of the ensuing response is not the devastating critique that these scholars seemingly imagine it to be. These are systems which rely on specificity and reward complexity in generating results (garbage in, garbage out, as computer scientists are fond of saying). For example, I frequently share blog posts and journal articles with Claude AI to provide background context for the questions I am asking. It has a remarkable capacity to synthesise in plausible and coherent ways if appropriately guided, doing so at a speed no human can match. I share Inger Mewburn’s belief that “the best way to use ChattieG (ChatGPT) is to imagine it as a talented, but easily misled, intern/research assistant, who has a sad tendency to be sexist, racist and other kinds of ‘isms’”. Much as some academics throw files and papers at their postdoctoral researchers expecting them to work it out, so too do they throw lazily articulated requests at generative AI systems before getting frustrated when the results do not meet their (unspecified) expectations. This failure to engage in a mindful and reflective way leaves them ill-equipped to take responsibility for the destructive qualities Mewburn points to, which can be mitigated through careful engagement and review.
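The gap between a lazy one-shot instruction and a context-rich request can be sketched in a few lines of code. The helper below is purely illustrative (the function name and template are my own assumptions, not any real prompting API): it simply assembles a prompt containing the elements the underwhelming prompts described above lacked – background context, a clearly defined task, the expected output, and an example.

```python
def build_prompt(context, task, expected_output, example=None):
    """Assemble a context-rich prompt rather than a bare one-shot instruction.

    Hypothetical helper for illustration only: it concatenates the
    elements a well-specified request to a generative AI system should
    contain, separated by blank lines.
    """
    parts = [
        f"Background context:\n{context}",
        f"Task:\n{task}",
        f"Expected output:\n{expected_output}",
    ]
    if example:
        parts.append(f"Example of the kind of result I want:\n{example}")
    return "\n\n".join(parts)


# A bare prompt of the kind critiqued above:
lazy = "Summarise critical theory."

# A richer alternative, spelling out context, task, format and an example:
rich = build_prompt(
    context="I am drafting a blog post about a journal article on the "
            "OECD's Future of Education and Skills 2030 agenda.",
    task="Summarise the Frankfurt School's core commitments in two sentences.",
    expected_output="Two plain-English sentences, no jargon.",
    example="'Critical theory combines philosophy with empirical social "
            "research to interpret society as a whole.'",
)
```

The point is not the code but the discipline it encodes: each field forces you to specify something the system cannot infer from a one-line request.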

My concern is that careless use of generative AI could rapidly spread within higher education, if these systems become normalised. If you approach their use in an instrumental and instructional way, as a means to outsource discrete tasks you would rather dispense with, the quality of your work will suffer. It will enable you to do a mediocre version of what you would have done anyway, much more quickly than would otherwise have been possible. In contrast, if you approach their use in a reflexive and dialogical way, as an interlocutor with which to develop your ideas, the quality of your work will be enhanced by the richness of generative AI’s contributions. In such a dialogue it can review, synthesise, reframe and critique with remarkable acuity once you have developed working routines which support this. It will always be more enjoyable and (usually) more productive to have these conversations with a human interlocutor. But this does not diminish the contribution which these systems can make to the process of scholarship.

    This was originally published on the LSE Impact Blog

  • Social Media in Higher Education: What’s happening?

    The articles in this special collection edited by Katy Jordan (Lancaster) and Mark Carrigan are a response to a recent call for papers which turned the famous Twitter interface prompt – ‘What’s happening?’ – on to the broader field of social media. The rapid changes of leadership and policy at Twitter in the process of its rebranding and re-emergence as X have precipitated migration and uncertainty, and highlighted the precarity of relying on corporate infrastructure to support public scholarship. In this special collection, a series of papers examine different aspects of current social media practices in higher education, from its relationship to academic identity, to research and teaching.