Our academic paper reading group exists to foster intellectual growth, cross-disciplinary understanding, and scholarly collaboration through the collective exploration of academic literature focused on responsible AI, trustworthy AI, AI safety, and related fields. We aim to:
- Create a supportive environment where members can engage with challenging ideas in AI ethics and safety
- Develop critical reading and analytical skills through structured discussion of cutting-edge AI research
- Bridge knowledge gaps between technical AI development and ethical considerations
- Build a community of informed practitioners committed to advancing responsible AI
- Translate theoretical AI safety concepts into practical product decisions and implementations
Meeting Structure
- Frequency: Monthly meetings (adjustable based on group preferences)
- Duration: 60 minutes per session
- Format:
  - 20 minutes: Introduction of the paper by the facilitator
  - 25 minutes: Guided discussion
  - 15 minutes: Open discussion, practical implications, and follow-up
Preparation
- Read the selected paper thoroughly before each meeting
- Note key points, questions, and connections to other work
- Consider preparing 1-2 discussion questions
Discussion Participation
- Contribute thoughtfully to the conversation
- Reference specific sections or arguments from the paper
- Listen actively to others' perspectives
- Respectfully challenge ideas, not individuals
Facilitation (Rotating Responsibility)
- Select a paper 2 weeks in advance of your facilitation date
- Prepare a brief introduction highlighting key concepts
- Develop 3-5 discussion questions to guide conversation
- Moderate the discussion to ensure balanced participation
Paper Selection
- Choose papers that are accessible to a multidisciplinary audience of both technical and non-technical members
- Prefer papers that present novel ideas or methodologies in responsible AI development
- Consider papers from underrepresented perspectives or regions to ensure diverse viewpoints on AI ethics
- Alternate between foundational works in AI safety and cutting-edge research on emerging AI risks
- Select papers with clear implications for product development and implementation
- Focus on research that addresses real-world AI deployment challenges and governance frameworks
- Aim for papers that can be thoroughly read in 1-2 hours
This reading group specifically focuses on academic literature that can inform and influence AI product decisions through:
- Ethical AI Development: Examining frameworks, methodologies, and case studies for building AI systems that respect human values, fairness, and dignity
- Technical Safety Mechanisms: Exploring technical approaches to alignment, interpretability, robustness, and security in AI systems
- Governance & Oversight: Reviewing proposed governance structures, auditing mechanisms, and regulatory approaches
- Human-AI Interaction: Understanding research on user experience, transparency, and appropriate trust calibration
- Societal Impact Assessment: Analyzing methods for evaluating and mitigating potential societal harms from AI deployment
- Implementation Strategies: Bridging the gap between theoretical AI safety research and practical product implementation
Our discussions will deliberately connect academic insights to concrete product decisions, aiming to develop actionable recommendations that can shape more responsible AI development practices within our organizations.
Communication & Resources
- Use the discussion board for announcements
- Maintain a repository of previously discussed papers
- Share relevant resources and follow-up readings
- Provide feedback to improve the reading group experience
Inclusivity
- Respect diverse academic backgrounds and expertise levels
- Create space for quieter members to contribute
- Explain discipline-specific terminology when necessary
- Acknowledge the value of dissenting viewpoints
To join the reading group, simply:
- Review our mission statement and participation guide
- Reach out to the organization admin, Dominik Dahlem
- Attend your first session prepared to discuss the current paper
- Consider when you might be willing to facilitate a future discussion
We look forward to engaging with fascinating research and diverse perspectives together!