Multimodal Learning for Social Good at ICME 2024

    Multimodal Learning for Social Good Workshop 2024

    Overview

    The first Multimodal Learning for Social Good (MML4SG) workshop will be held as part of the 2024 IEEE International Conference on Multimedia and Expo (ICME). This workshop aims to gather diverse perspectives to explore the potential of multimodal learning, especially foundation models, to tackle complex societal challenges. Key questions include, but are not limited to: How will foundation models transform computational social science? How can multimodal learning approaches enhance our understanding and management of public health crises? In what ways can multimodal learning help detect and counter misinformation online? Can multimodal learning be used to identify and mitigate the spread of polarizing content across social media platforms? How can multimodal learning assist in combating climate change and environmental degradation? The workshop aims to facilitate the exchange of ideas, identify future research directions, and foster collaborations among researchers and practitioners from areas such as AI, computational social science, public health, and environmental studies.

    Multimodal learning for social good is particularly relevant now due to the rapid advancement of AI technologies, specifically Large Language Models (LLMs) and Large Multimodal Models (LMMs), which have significantly broadened the scope of AI applications. These societal challenges are at the forefront of global concern, and AI researchers are increasingly focused on applying advanced AI techniques to deliver solutions that are not only technologically advanced but also socially beneficial.

    Invited Speaker

    Max Lu
    Computer Science, Massachusetts Institute of Technology

    Max Lu is a final-year PhD candidate in the EECS department at MIT, specializing in AI and ML research. Working with Professor Faisal Mahmood, he focuses on representation learning, foundation models, and multimodal generative AI in computational pathology. Max is also the Founding Chief Scientific Officer of Modella AI. He holds a B.S. in Biomedical Engineering and Applied Mathematics & Statistics from Johns Hopkins University and an M.S. in Computer Science from MIT and has been a recipient of the Tau Beta Pi and Siebel Scholar PhD Fellowships.

    Schedule

    Date & Time: July 19, 2024 at 13:00 (Toronto, Canada local time)

    Location: Salon A 3F at Niagara Falls Marriott on the Falls

    13:00 Introduction (5 min)

    13:05 Keynote Session

    Max Lu. Data-driven Multimodal Foundation Models for Computational Pathology.

    14:00 Contributed Session (13 min talk + 2 min Q&A)

    Dristi Datta, Manoranjan Paul, Manzur Murshed, Shyh Wei Teng, and Leigh M. Schmidtke. Unveiling Soil-Vegetation Interactions: Reflection Relationships and an Attention-Based Deep Learning Approach for Carbon Estimation. [PDF]

    Hao Liu, Lijun He, and Jiaxi Liang. Joint Modal Circular Complementary Attention for Multimodal Aspect-Based Sentiment Analysis. [PDF]

    Pantid Chantangphol, Sattaya Singkul, Thanawat Lodkaew, Nattasit Maharattanamalai, Atthakorn Petchsod, Theerat Sakdejayont, and Tawunrat Chalothorn. An Enhanced Multimodal Negative Feedback Detection Framework with Target Retrieval in Thai Spoken Audio. [PDF]

    Liman Wang* and Hanyang Zhong*. LLM-SAP: Large Language Models Situational Awareness-Based Planning. [PDF]

    Yundi Zhang, Xin Wang, Ziyi Zhang, Xueying Wang, Xiaohan Ma, Yingying Wu, Han-Wu-Shuang Bao, and Xiyang Zhang. Using Large Language Models to Understand Leadership Perception and Expectation.

    Call for Papers

    We welcome submissions of technical papers from the fields of multimodal learning, social science, and beyond that explore the integration of advanced multimedia technologies in addressing pressing societal challenges. Topics of interest include (but are not limited to):

    • Accessibility in Multimedia Content
    • Benchmark Datasets
    • Crisis Response and Humanitarian Aid
    • Cross-Cultural and Multilingual Analysis
    • Digital Humanities and Social Sciences
    • Ethical Considerations and Societal Impacts of Deploying Multimedia Technologies
    • Human-Computer Interaction in Multimodal Interfaces
    • Impact of Deepfakes on Multimedia Journalism
    • Inclusivity and Diversity in Social Media
    • Innovative Multimodal Data Analysis Techniques
    • Integration of AI and IoT for Environmental Monitoring
    • Multimodal Content Generation for Social Good
    • Multimodal Health Informatics
    • Multimodal Sentiment Analysis and Public Opinion Mining
    • Novel Evaluation Metrics and Methods
    • Public Policy Deployment
    • Social Media and Mental Health
    • Social Movement and Activism Analysis
    • Use of LLMs/LMMs in Understanding Cultural and Societal Trends
    • User Behavior Analysis on Social Platforms
    • VR/AR for Education and Training

    All submissions should present original, unpublished work relevant to the workshop's themes. The review process will be double-blind. Authors should prepare their manuscripts according to the ICME Author Information and Submission Instructions: https://2024.ieeeicme.org/author-information-and-submission-instructions/

    Submission address: https://cmt3.research.microsoft.com/ICME2024W

    Track name: ICME2024-Workshop-MML4SG

    Important Dates

    Submission due: March 27, 2024
    Acceptance notification: April 19, 2024 (extended from April 17, 2024)
    Camera-ready: May 31, 2024
    Workshop date: July 19, 2024

    Note: All times are AoE (Anywhere on Earth).