Organiser: ANDREA FRONZETTI COLLADON
This session is dedicated to innovative research on text mining solutions that could be used in business settings to support managers in their strategic decisions. We are particularly interested in studies that combine methods from network science, natural language processing or machine learning with theories from the social sciences, psychology, humanities and linguistics to advance knowledge and discovery about management and information systems.
One focus of this session is research at the nexus of text analysis (including discourse analysis, content analysis, text mining, and natural language processing) and network science. The study of networks of words, representation of text-based information as graphs (e.g., knowledge graphs), and the extraction of network data from text data are also topics of interest.
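As a minimal illustration of one theme in scope, the extraction of network data from text, the sketch below builds a word co-occurrence graph from a toy corpus and ranks words by weighted degree. It assumes Python with networkx; the two documents and the naive tokenization are purely illustrative, not a prescribed pipeline.

```python
# Minimal sketch: a word co-occurrence network from raw text.
# Assumes Python with networkx installed; tokenization kept deliberately naive.
import itertools
import re

import networkx as nx

docs = [
    "managers use text mining to support strategic decisions",
    "network science and text mining support managers",
]

G = nx.Graph()
for doc in docs:
    tokens = re.findall(r"[a-z]+", doc.lower())
    # Link every pair of distinct words that co-occur in the same document,
    # accumulating a weight for repeated co-occurrences.
    for w1, w2 in itertools.combinations(set(tokens), 2):
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1
        else:
            G.add_edge(w1, w2, weight=1)

# Rank words by weighted degree as a crude importance measure.
print(sorted(G.degree(weight="weight"), key=lambda x: -x[1])[:5])
```

Richer constructions (sentence-level windows, syntactic links, knowledge-graph triples) fit the same graph representation.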
The application of text mining for business has led to notable work, e.g., on people analytics, recommender systems, collaborative work, and the diffusion of (mis)information offline and online. Another application domain is organizational communication, where interventions supporting employee communication and client interactions have been developed based on a better understanding of the impact that language has within and across organizations.
We invite abstract/paper submissions that contribute to the consolidation of text analysis and network analysis for business, or discuss new methods, applications, or theoretical approaches. We are interested in basic and applied studies.
Organiser: FRANCESCA GRECO
This session is dedicated to innovative research on text mining solutions or applications in the health domain. We are particularly interested in studies that combine methods from multivariate analysis, network science, natural language processing or machine learning with theories from the social sciences, psychology, medicine, nursing, humanities, and linguistics to advance knowledge and discovery about health contexts and practices.
One focus of this session is the application of text analysis (including discourse analysis, content analysis, text mining, emotional text mining, sentiment analysis, and natural language processing) to different types of textual data (social media, news media, interviews, focus groups, etc.) and health contexts (public health, hospitals, private practices, etc.). Studies of pandemics, vaccination, organ donation, chronic conditions, and patient and professional perceptions are also topics of interest.
The application of text mining in the health area is increasingly popular and has led to notable work; for example, online communication has been successfully used to capture diverse trends in health and disease-related issues, such as flu symptoms, sentiment toward allergies and vaccination, and people’s storytelling about Covid-19. Another application domain is the analysis of patient, stakeholder, and health professional perceptions that shape health behaviors and practices.
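As one toy example of the kind of analysis in scope, the sketch below applies a lexicon-based sentiment scorer to invented health-related posts. It assumes Python with nltk and the VADER lexicon; the session is of course open to any sentiment or emotional text mining method, not this one in particular.

```python
# Illustrative sketch: lexicon-based sentiment scoring of health-related posts.
# Assumes Python with nltk installed; the example texts are invented.
import nltk

nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

posts = [
    "Got my flu shot today, quick and painless.",
    "Allergy season is terrible this year, I can barely breathe.",
]

sia = SentimentIntensityAnalyzer()
for post in posts:
    scores = sia.polarity_scores(post)  # neg/neu/pos plus a compound score in [-1, 1]
    print(f"{scores['compound']:+.2f}  {post}")
```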
We invite abstract/paper submissions that contribute to the consolidation of text mining, or discuss new methods, applications, or theoretical approaches. We are interested in basic and applied studies.
Organiser: MASSIMO ARIA
Scientific Knowledge Synthesis, or Science Mapping, provides a synthesis of the research carried out within a given domain. It evaluates all available evidence on a particular topic, summarizing comprehensive literature searches through advanced qualitative and quantitative synthesis methods.
The use of science mapping methods provides more objective and reliable analyses, based on statistical techniques applied to a large volume of scientific documents in the field of interest. In this way, science mapping helps depict the history and the general state of the art of a specific research domain.
Science mapping is a recent discipline that is rapidly spreading across research domains. In 2020, Science published an article (Brainard, 2020) about the key role that statistical approaches will play in the near future in supporting scientists who are drowning in a sea of scientific publications. The COVID-19 crisis is a striking example: in 2020, more than 200,000 scientific documents were published about the new coronavirus, and the need for automatic tools to extract knowledge from them became pressing.
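For readers unfamiliar with the mechanics, a basic building block of many science mapping pipelines is co-word analysis: counting how often pairs of keywords co-occur across documents. The sketch below, in Python with invented author-keyword records, is a deliberately minimal illustration, not a reference to any specific tool.

```python
# Minimal co-word analysis sketch, a basic building block of science mapping.
# The author-keyword records below are invented for illustration.
from collections import Counter
from itertools import combinations

records = [
    {"covid-19", "text mining", "topic modeling"},
    {"covid-19", "vaccination", "text mining"},
    {"bibliometrics", "science mapping", "covid-19"},
]

pair_counts = Counter()
for keywords in records:
    # Each unordered keyword pair within a document counts as one co-occurrence.
    pair_counts.update(combinations(sorted(keywords), 2))

for pair, n in pair_counts.most_common(3):
    print(n, pair)
```

The resulting pair counts are exactly the edge weights of a keyword co-occurrence network, the raw material of most co-word maps.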
This session aims to collect contributions on new methodologies, tools, and applications for evaluating scientific knowledge, considering publications, patents, grants, and grey literature.
Brainard, J. (2020). Scientists are drowning in COVID-19 papers. Can new tools keep them afloat? Science. DOI: 10.1126/science.abc7839
Organiser: MARIO MONTELEONE
The purpose of this session is to present tools, analyses and results specific to rule-based NLP, that is to say the type of automatic natural language processing based on morphosyntax formalization methods. Quoting Max Silberztein, the aim of this session is to show how it is possible “to describe, exhaustively and with absolute precision, all the sentences of a language likely to appear in written texts. This project fulfills two needs: it provides linguists with tools to help them describe languages exhaustively (linguistics), and it aids in the building of software able to automatically process texts written in natural language… A linguistic project needs to have a theoretical and methodological framework (how to describe this or that linguistic phenomenon; how to organize the different levels of description); formal tools (how to write each description); development tools to test and manage each description; and engineering tools to be used in sharing, accumulating, and maintaining large quantities of linguistic resources. There are many potential applications of descriptive linguistics for NLP: spell-checkers, intelligent search engines, information extractors and annotators, automatic summary producers, automatic translators, etc. These applications have the potential for considerable economic usefulness, and it is therefore important for linguists to make use of these technologies and to be able to contribute to them.”
As is well known, the vast majority of today’s computer applications in NLP use stochastic techniques (probabilistic, statistical or neural), while those that base their analysis on formal, morphosyntactically tagged linguistic resources are rare. Such computer applications can be defined as linguistic engineering software, an example of which is the NooJ NLP environment, created and developed by Max Silberztein himself. This type of NLP software presents very marked methodological and applicative differences with respect to the computational statistical methods applied to natural language analysis. Therefore, this special session is open to all researchers and linguists who, with the use of NooJ or other rule-based NLP software, will present the advantages, peculiarities and specific functions of these techniques and procedures. Hence, we invite the submission of abstracts/papers that contribute to this type of study (a minimal sketch of the rule-based approach appears after the list below), with specific reference to:
- Linguistic Resources (Typography, Spelling, Syllabification, Phonemic and Prosodic transcription, Morphology, Lexical Analysis, Local Syntax, Structural Syntax, Transformational Analysis, Paraphrase Generation, Semantic annotations, Semantic analysis, Formal Semantic representations, etc.);
- Digital Humanities (Corpus Linguistics, Discourse Analysis, Sentiment Analysis, Literature Studies, Second-Language Teaching, Narrative content analysis, Corpus processing for the Social Sciences, etc.);
- Natural Language Processing Applications (Business Intelligence, Text Mining, Text Generation, Automatic Paraphrasing, Machine Translation, etc.).
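As announced above, here is a minimal sketch of the rule-based approach: an electronic dictionary assigning morphosyntactic tags, plus one local grammar matching a tag pattern. It is written in plain Python purely for illustration; NooJ itself is a dedicated environment with its own formalisms, and this is in no way its API.

```python
# Minimal illustration of rule-based NLP: an electronic dictionary plus one
# local grammar, sketched in plain Python (not NooJ's own formalism or API).
import re

# Electronic dictionary: surface form -> (lemma, morphosyntactic tag).
lexicon = {
    "the": ("the", "DET"),
    "cat": ("cat", "N+sg"),
    "cats": ("cat", "N+pl"),
    "eat": ("eat", "V"),
    "eats": ("eat", "V+3sg"),
    "fish": ("fish", "N"),
}

def annotate(sentence):
    """Tokenize and look each token up in the dictionary."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [(t, *lexicon.get(t, (t, "UNKNOWN"))) for t in tokens]

def matches_svo(annotated):
    """Local grammar: recognize a DET N V DET N pattern over the tags."""
    tags = [tag.split("+")[0] for _, _, tag in annotated]
    return tags == ["DET", "N", "V", "DET", "N"]

parsed = annotate("The cat eats the fish")
print(parsed)
print("SVO pattern:", matches_svo(parsed))
```

Even this toy shows the two ingredients the approach rests on: exhaustive lexical description and explicit, inspectable rules, as opposed to parameters learned from data.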
Organisers: ILARIA PRIMERANO and GIUSEPPE GIORDANO
The special session is devoted to fostering a dialogue among scholars of different domains interested in the analysis of textual data with a network-analysis-based approach.
We refer to the definition and analysis of textual data as graph structures, aiming to highlight useful and meaningful information in documents.
Different approaches discuss the possibility of combining textual data and networks. Examples of applications derive from textual data extracted from different sources, such as digital text documents, open-ended survey questions, user-generated textual content in social media, scientific topics detected in bibliometric analyses of scientific papers, opinions of people and social influence, sentiment analysis of comments, fake news detection, and user stances and the veracity of rumors, to name a few.
The common feature is that words are linked by some kind of relationship, due to co-occurrences, co-citations, semantics, etc. Relationships can be of different kinds, and in some cases multiple relationships are jointly present.
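To make the idea of jointly present relationships concrete, the sketch below (Python with networkx, on invented data) stores two kinds of edges over the same word nodes, distinguished by a layer attribute, and extracts one layer for separate analysis. This is one simple encoding of a multilayer textual network, not the only one.

```python
# Sketch of a two-layer textual network: the same word nodes carry edges of
# different kinds, distinguished by a "layer" attribute. Data are invented.
import networkx as nx

G = nx.MultiGraph()
# Layer 1: co-occurrence edges between words appearing in the same document.
G.add_edge("vaccine", "trust", layer="cooccurrence", weight=3)
G.add_edge("vaccine", "safety", layer="cooccurrence", weight=5)
# Layer 2: semantic-similarity edges (e.g., derived from word embeddings).
G.add_edge("trust", "confidence", layer="semantic", weight=0.8)
G.add_edge("vaccine", "immunization", layer="semantic", weight=0.9)

# Pull out one layer as a simple graph for layer-specific analysis.
semantic = nx.Graph()
semantic.add_edges_from(
    (u, v, d) for u, v, d in G.edges(data=True) if d["layer"] == "semantic"
)
print(list(semantic.edges(data=True)))
```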
We welcome conceptual and empirical research and applied studies, as well as innovative methodological proposals, leveraging the experiences gained in different research fields.
Topics of interest include but are not limited to:
• Trends in scientific literature
• Lexical tables as affiliation matrices
• Multilayer textual networks
• Network and sentiment analysis
• Semantic network analysis
• Shared languages
• Temporal text networks
• Text mining from social media
Organiser: ROSANNA CATALDO
In recent times, composite indicators have gained remarkable popularity in a broad variety of research areas, being applied in an increasing number of fields, from the social sphere to the environment and governance. Composite indicators are built by combining a series of multi-domain indicators, generally derived from numerical and categorical data. Nevertheless, is it possible to build a composite indicator using data from alternative sources? Given that the Internet generates new kinds of data, it is possible to integrate statistics provided by traditional sources with textual data extracted from websites and social media platforms. This session aims at understanding how text analysis can support the construction of new kinds of composite indicators, exploring the methodological and applicative challenges of such an approach.
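To fix ideas, the toy sketch below shows one possible way of folding a text-derived measure (e.g., sentiment mined from websites or social media) into a composite indicator, by z-scoring each elementary indicator across units and averaging. All figures are invented, and real constructions require careful choices about weighting, imputation, and robustness.

```python
# Toy sketch: combining traditional and text-derived indicators into a
# composite score via z-score normalization and a simple average.
# All figures are invented for illustration.
from statistics import mean, stdev

units = ["region_a", "region_b", "region_c"]
indicators = {
    "income_index":   [0.71, 0.58, 0.64],    # traditional source
    "services_index": [0.55, 0.62, 0.49],    # traditional source
    "web_sentiment":  [0.10, -0.25, 0.30],   # mined from websites/social media
}

def zscores(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

normalized = {name: zscores(values) for name, values in indicators.items()}
# Average the normalized indicators unit by unit (equal weights assumed).
composite = [mean(column) for column in zip(*normalized.values())]
for unit, score in zip(units, composite):
    print(f"{unit}: {score:+.2f}")
```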
Organiser: MARINA MARINO
This session aims at stimulating a discussion about the use of dimensionality reduction methods in Text Mining, with reference both to methodological assumptions and to empirical applications. Textual data available to the research community are increasingly large and complex, posing new challenges that must be addressed with attention to their characteristics. Rank reduction methods and multidimensional statistics have received considerable interest in the last decade, since these approaches are able to extract latent dimensions that distil the information content a dataset can offer. Moreover, reducing the dimensionality leads to a better representation of the data under examination. The dialogue will be oriented to the mathematical-statistical processes underlying the proposals, the cases of applicability, and the limits and opportunities of applications to automatic text analysis. We invite the submission of innovative and original contributions containing methodological developments or applied studies.
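As a concrete, minimal example of rank reduction applied to text, the sketch below computes a latent semantic analysis via truncated SVD of a TF-IDF matrix. It assumes Python with scikit-learn; the four-document corpus is invented and far too small for meaningful use.

```python
# Minimal sketch of rank reduction on textual data: latent semantic analysis
# via truncated SVD of a TF-IDF matrix. The corpus is invented.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "patients discuss vaccine safety online",
    "online posts about vaccine side effects",
    "managers use dashboards for strategic decisions",
    "strategic decisions rely on business dashboards",
]

X = TfidfVectorizer().fit_transform(corpus)   # documents x terms (sparse)
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(X)                      # documents x latent dimensions

print(Z.round(2))  # documents on similar topics land close together
print("explained variance:", svd.explained_variance_ratio_.round(2))
```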
Organisers: FIORENZA DERIU and FRANCESCA DELLA RATTA
The session is focused on the integration of methods, software and/or approaches to the analysis of textual data for the extraction of relevant information. The rationale of this session is mainly based on the idea that integrated approaches pay much more attention to the research problem than the standardized analysis routines do.
Putting the problem at the center of the analysis also generally allows integration, at the theoretical level, between domain researchers and text mining specialists: in fact, the tools that the various text analysis techniques make available produce meaningful results when "handled" by experts in the field. After all, no analysis technique can be considered a substitute for the theoretical framework that allows data interpretation: no "fact" (or datum) speaks for itself, whether it is a correspondence matrix or a word cloud, because there is always a need for someone to tell the story, establishing well-founded and controllable inferences.
We invite abstract/paper submissions that combine different research approaches (e.g., hermeneutic analysis and computational linguistics or textual statistics), or at least combine different software, so as to enhance the extraction of knowledge from textual data.