Organiser: ANDREA FRONZETTI COLLADON
This session is dedicated to innovative research on text mining solutions that could be used in business settings to support managers in their strategic decisions. We are particularly interested in studies that combine methods from network science, natural language processing or machine learning with theories from the social sciences, psychology, humanities and linguistics to advance knowledge and discovery about management and information systems.
One focus of this session is research at the nexus of text analysis (including discourse analysis, content analysis, text mining, and natural language processing) and network science. The study of networks of words, representation of text-based information as graphs (e.g., knowledge graphs), and the extraction of network data from text data are also topics of interest.
The application of text mining for business has led to eminent work, e.g., on people analytics, recommender systems, collaborative work, and the diffusion of (mis)information offline and online. Another application domain is organizational communication, where actions meant to support employee communication and client interactions have been developed based on a better understanding of the impact that language has within and across organizations.
We invite abstract/paper submissions that contribute to the consolidation of text analysis and network analysis for business, or discuss new methods, applications, or theoretical approaches. We are interested in basic and applied studies.
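As an illustration of one topic named above, the extraction of network data from text, the sketch below builds a minimal word co-occurrence network from a few toy documents. All function names and data are illustrative; actual submissions would use richer tokenization and real corpora.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(documents):
    """Count how often each pair of words co-occurs within a document.

    Returns a Counter mapping alphabetically sorted word pairs (the
    network's edges) to the number of documents in which they co-occur.
    """
    edges = Counter()
    for doc in documents:
        tokens = sorted(set(doc.lower().split()))  # unique words per document
        for pair in combinations(tokens, 2):
            edges[pair] += 1
    return edges

docs = [
    "managers use text mining",
    "text mining supports managers",
    "networks of words",
]
edges = cooccurrence_edges(docs)
print(edges[("managers", "mining")])  # -> 2 (co-occur in two documents)
```

The resulting edge weights can be loaded into any graph library for centrality or community analysis.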
Organiser: FRANCESCA GRECO
This session is dedicated to innovative research on text mining solutions or applications in the health domain. We are particularly interested in studies that combine methods from multivariate analysis, network science, natural language processing or machine learning with theories from the social sciences, psychology, medicine, nursing, humanities, and linguistics to advance knowledge and discovery about health contexts and practices.
One focus of this session is the application of text analysis (including discourse analysis, content analysis, text mining, emotional text mining, sentiment analysis, and natural language processing) to different types of textual data (social media, traditional media, interviews, focus groups, etc.) and health contexts (public health, hospitals, private practices, etc.). The study of pandemics, vaccination, organ donation, chronic conditions, and patient and professional perceptions are also topics of interest.
The application of text mining in the health domain is increasingly popular and has led to eminent work: online communication, for example, has been successfully used to capture diverse trends in health and disease-related issues, such as flu symptoms, sentiment toward allergies and vaccination, and people’s storytelling about Covid-19. Another application domain is the analysis of the perceptions of patients, stakeholders, and health professionals that shape health behaviors and practices.
We invite abstract/paper submissions that contribute to the consolidation of text mining, or discuss new methods, applications, or theoretical approaches. We are interested in basic and applied studies.
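One of the simplest approaches behind the sentiment analysis mentioned above is lexicon-based scoring. The sketch below uses tiny, hypothetical word lists (real studies rely on validated lexicons or trained models) to score a health-related snippet:

```python
# Hypothetical mini-lexicons for illustration only; real work uses
# validated sentiment or emotion resources.
POSITIVE = {"relief", "hope", "recovered", "grateful"}
NEGATIVE = {"fever", "worried", "pain", "anxious"}

def sentiment_score(text):
    """Return (positive hits - negative hits) / total tokens; 0.0 for empty text."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

print(sentiment_score("worried about fever but grateful for care"))
```

Normalizing by token count keeps scores comparable across documents of different lengths.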
Organiser: MASSIMO ARIA
Scientific Knowledge Synthesis, or Science Mapping, provides a synthesis of the research carried out within a given domain. It evaluates all available evidence on a particular topic, summarizing comprehensive literature searches through advanced qualitative and quantitative synthesis methods.
The use of science mapping methods provides more objective and reliable analyses, based on statistical techniques applied to a large volume of scientific documents related to the field of interest. In this way, science mapping helps depict the history and the general state of the art of a specific research domain.
Science mapping is a recent discipline that is rapidly spreading across all research domains. In 2020, Science published an article (Brainard, 2020) about the key role that statistical approaches will play in the near future in supporting scientists who are drowning in a sea of scientific publications. The COVID-19 crisis is a striking example: in 2020, more than 200,000 scientific documents were published about the new coronavirus, and the need for automatic tools to extract knowledge from them has become pressing.
This session aims to collect contributions on new methodologies, tools, and applications for evaluating scientific knowledge across publications, patents, grants, and grey literature.
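A small example of the kind of quantitative indicator used in science mapping is the h-index, computed here from a toy citation list (the data are illustrative):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank       # paper at this rank still has enough citations
        else:
            break          # all later papers have fewer citations than their rank
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Indicators like this, aggregated over large bibliographic datasets, are the building blocks of the mapping analyses the session targets.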
Brainard, J. (2020). Scientists are drowning in COVID-19 papers. Can new tools keep them afloat? The hunt is on for better ways to collect and search pandemic studies. Science Magazine. DOI: 10.1126/science.abc7839
Organiser: MARIO MONTELEONE
The purpose of this session is to present tools, analyses and results specific to rule-based NLP, that is to say the type of automatic natural language processing based on morphosyntax formalization methods. Quoting Max Silberztein, the aim of this session is to show how it is possible “to describe, exhaustively and with absolute precision, all the sentences of a language likely to appear in written texts. This project fulfills two needs: it provides linguists with tools to help them describe
languages exhaustively (linguistics), and it aids in the building of software able to automatically process texts written in natural language… A linguistic project needs to have a theoretical and methodological framework (how to describe this or that linguistic phenomenon; how to organize the different levels of description); formal tools (how to write each description); development tools to test and manage each description; and engineering tools to be used in sharing, accumulating, and
maintaining large quantities of linguistic resources. There are many potential applications of descriptive linguistics for NLP: spell-checkers, intelligent search engines, information extractors and annotators, automatic summary producers, automatic translators, etc. These applications have the potential for considerable economic usefulness, and it is therefore important for linguists to make use of these technologies and to be able to contribute to them.”
As is well known, the vast majority of today’s computer applications in NLP use stochastic techniques (probabilistic, statistical, or neural), while those that base their analysis on formal, morphosyntactically tagged linguistic resources are very rare. Such computer applications can be defined as linguistic engineering software, an example of which is the NooJ NLP environment, created and developed by Max Silberztein himself. This type of NLP software presents very marked methodological and applicative differences with respect to the computational statistical methods applied to natural language analysis. This special session is therefore open to all researchers and linguists who, using NooJ or other rule-based NLP software, will present the advantages, peculiarities, and specific functions of these techniques and procedures. Hence, we invite the submission of abstracts/papers that contribute to this type of study, with specific reference to:
- Linguistic Resources (Typography, Spelling, Syllabification, Phonemic and Prosodic
transcription, Morphology, Lexical Analysis, Local Syntax, Structural Syntax,
Transformational Analysis, Paraphrase Generation, Semantic annotations, Semantic analysis, Formal Semantic representations, etc.);
- Digital Humanities (Corpus Linguistics, Discourse Analysis, Sentiment Analysis, Literature Studies, Second-Language Teaching, Narrative content analysis, Corpus processing for the Social Sciences, etc.);
- Natural Language Processing Applications (Business Intelligence, Text Mining, Text Generation, Automatic Paraphrasing, Machine Translation, etc.).
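To give a flavor of the rule-based approach, the sketch below combines a toy dictionary with explicit suffix rules to lemmatize English words. This is not NooJ’s actual formalism; real rule-based systems use far richer, hand-crafted morphosyntactic resources, and the lexicon and rules here are purely illustrative.

```python
import re

# Toy lexicon of lemmas; a real system would hold thousands of tagged entries.
LEXICON = {"process", "analyze", "translate", "language"}

# Ordered suffix-stripping rules: (compiled pattern, replacement).
SUFFIX_RULES = [
    (re.compile(r"(\w+)ing$"), r"\1"),   # processing -> process
    (re.compile(r"(\w+)es$"), r"\1"),    # processes  -> process
    (re.compile(r"(\w+)s$"), r"\1"),     # languages  -> language
    (re.compile(r"(\w+)ed$"), r"\1e"),   # translated -> translate
]

def lemmatize(word):
    """Return a lexicon lemma if some rule maps the word into the lexicon."""
    w = word.lower()
    if w in LEXICON:
        return w
    for pattern, repl in SUFFIX_RULES:
        candidate = pattern.sub(repl, w)
        if candidate in LEXICON:
            return candidate
    return w  # unknown word: returned unchanged

print(lemmatize("processing"))  # -> process
```

Because every rule is explicit, the analysis is fully inspectable and reproducible, which is precisely the methodological contrast with stochastic NLP that the session highlights.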
Organisers: ILARIA PRIMERANO and GIUSEPPE GIORDANO
The special session is devoted to fostering a dialogue among scholars of different domains interested in the analysis of textual data with a network-analysis-based approach. We refer to the definition and analysis of textual data as graph structures, aiming to highlight useful and meaningful information in documents. Different approaches discuss the possibility of combining textual data and networks. Examples of applications derive from the extraction of textual data from different sources, such as digital text documents, open-ended survey questions, user-generated content in social media, scientific topics detected in bibliometric analyses of scientific papers, opinions and social influence, sentiment analysis of comments, fake news detection, and user stances and the veracity of rumors, to name a few. The common feature is that words are linked by some kind of relationship due to co-occurrences, co-citations, semantics, etc. Relationships can be of different kinds, and in some cases multiple relationships can be jointly present. We would like to host conceptual and empirical research, applicative studies, as well as innovative methodological proposals, leveraging the experiences gained in different research fields.
Topics of interest include but are not limited to:
• Trends in scientific literature
• Lexical tables as affiliation matrices
• Multilayer textual networks
• Network and sentiment analysis
• Semantic network analysis
• Shared languages
• Temporal text networks
• Text mining from social media
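One of the topics above, lexical tables as affiliation matrices, can be sketched concretely: a documents-by-words incidence table A projects into a words-by-words network via the product AᵀA, whose off-diagonal entries count the documents two words share. The table below is illustrative data only.

```python
# Documents-by-words incidence ("lexical") table; rows are documents,
# columns follow the order of `words`.
words = ["network", "text", "mining"]
table = [
    [1, 1, 0],   # doc 1 contains: network, text
    [0, 1, 1],   # doc 2 contains: text, mining
    [1, 1, 1],   # doc 3 contains all three
]

def word_projection(table):
    """Compute A^T A for a 0/1 documents-by-words matrix A.

    Off-diagonal entry (i, j) counts the documents in which words i and j
    co-occur; diagonal entry (i, i) counts the documents containing word i.
    """
    n = len(table[0])
    proj = [[0] * n for _ in range(n)]
    for row in table:
        for i in range(n):
            for j in range(n):
                proj[i][j] += row[i] * row[j]
    return proj

W = word_projection(table)
print(W[0][1])  # documents shared by "network" and "text" -> 2
```

The same projection on the rows (AAᵀ) yields a document-similarity network, showing how one lexical table supports both modes of analysis.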
Other Special Sessions will be announced soon!