From Theory to Practice: Special Session on Large Language and Foundation Models
Location: Birmingham, UK
Conference: DSAA 2025 (IEEE International Conference on Data Science and Advanced Analytics)
Date: October 9-13, 2025 (exact session date to be announced)
Large language and foundation models have rapidly emerged as pivotal technologies in data science and analytics, offering unprecedented capabilities in text generation, knowledge extraction, and complex decision-making. This special session seeks to bridge cutting-edge theory with real-world applications, providing a venue for researchers and practitioners to exchange novel methodologies, deployment strategies, and impact-driven insights. By spotlighting both breakthrough techniques and operational challenges, the session aims to foster cross-pollination of ideas, accelerate innovation, and elucidate pathways for the seamless integration of large language models into diverse data-driven ecosystems.
Important Dates
- Submission Deadline: May 2nd, 2025
- Paper Notification: July 24th, 2025
- Camera-Ready Deadline: August 21st, 2025
- Contact: amllab[at]bit.uni-bonn.de
Submission
To submit a paper to SSLLFM2025, go to OpenReview and select the “Special Session: From Theory to Practice: Special Session on Large Language and Foundation Models” track.
Each paper submitted to SSLLFM2025 should be no more than ten (10) pages in length and should be formatted following the standard 2-column U.S. letter style of the IEEE conference template. For further information and instructions, see the IEEE Proceedings Author Guidelines.
All submissions will be double-blind reviewed by the Program Committee based on technical quality, relevance to the special session’s topics of interest, originality, significance, and clarity. Author names and affiliations must not appear in the submissions, and bibliographic references must be adjusted to preserve author anonymity. Submissions failing to comply with the paper formatting or author anonymity requirements will be rejected without review.
Because of the double-blind review process, non-anonymous papers that have been issued as technical reports or similar cannot be considered for SSLLFM2025. An exception to this rule applies to papers posted on arXiv at least one month prior to the SSLLFM2025 submission deadline: authors may submit these arXiv papers provided that the submitted paper’s title and abstract differ from those appearing on arXiv.
Call for Papers
Topics of interest include, but are not limited to:
- Model Training and Optimization:
  - Techniques to deal with hallucinations
  - Training data for LLMs
  - Efficient and stable techniques for training and fine-tuning LLMs
  - Scalable approaches for distributed model training
  - Middleware for scale-out data preparation for LLM training
  - Workflow orchestration for the end-to-end LLM life cycle
  - Resource management for compute- and energy-efficient model training
  - Representation learning
- Model Utilization and Integration:
  - Using LLMs effectively as tools for reinforcement learning or search
  - Enhancing LLM capabilities by using external tools such as search engines
  - Visual prompt tuning and in-context learning
  - Enabling easy experimentation with high utilization for training foundation models in the cloud
  - Strategies to scale resources for training/fine-tuning foundation models
  - Instruction tuning, including generation of instruction-tuning data
  - Parallel training: data, model, and tensor parallelism (attention and weights)
  - Distributed workflows for data cleansing and model usage (e.g., LangChain)
  - Principled AI
  - Investigating reasoning capabilities of LLMs
  - Retrieval-augmented generation
  - Alternative architectures such as state space models
- Compact Language Models and Knowledge Distillation:
  - Knowledge representations for training small/compact language models
  - Evaluation of different teacher-student distillation and model compression strategies
  - Techniques for efficient data encoding to preserve linguistic properties in compact models
  - Deployment of lightweight models in resource-constrained environments
  - Case studies on the effectiveness of compact models in various NLP tasks
- Application-Specific Models:
  - Math LLMs
  - Multimodal foundation models
  - Trustworthy foundation models
  - Large-scale visual foundation models
  - Time-series foundation models for forecasting, prediction, and control
  - Multi-agent systems using LLMs
  - Recommender systems using LLMs
  - Knowledge management using LLMs
- Knowledge Incorporation and Adaptation:
  - Approaches to handle knowledge recency and effectively update knowledge within LLMs
  - Incorporating domain knowledge into LLMs
- Evaluation and Benchmarking:
  - Additional benchmarks to close the gap between human and automatic reference-based evaluation
Proceedings and Indexing
All accepted full-length special session papers will be published by IEEE in the DSAA main conference proceedings under its Special Session scheme. All papers will be submitted for inclusion in the IEEE Xplore Digital Library.
Organizers
- Prof. Dr. Rafet Sifa (University of Bonn, Germany)
- Dr. Dhaval Patel (IBM Research, USA)
- Tobias Deußer (Fraunhofer IAIS, Germany)
- Dr. Lorenz Sparrenberg (University of Bonn, Germany)