Modern scientific research has grown exponentially over the past several decades. The growth in research articles, patents, preprints, white papers and informal content on the web (e.g., company product descriptions) has become "big data": an information overload for scientists and engineers, as well as for the government agencies, foundations and companies that support them. More than simply a change in scale, science increasingly constitutes a complex system: apparently complicated, involving strong interactions between components, and predisposed to emergent or unexpected collective outcomes. The growing number of scientists and research teams worldwide, increasingly connected via multiple channels—international conferences, online publications, e-mail, and science blogs—has multiplied this complexity. Moreover, intensifying specialization in science and engineering disciplines has made it more difficult for scientists to communicate and collaborate, and for evaluators to judge the promise and progress of research investments in those areas.

All of these changes in this $400 billion enterprise point to the importance of quantitative and automated approaches to science that take into account a deluge of information no individual scientist can master. The increasing availability of large-scale datasets that capture the major activities of science—publications, patents, citations, grant proposals, as well as the detailed meta-data associated with them—has created an unprecedented opportunity to explore the patterns of scientific production and reward in a quantitative manner. In contrast with standard bibliometric studies, the recent surge of quantitative studies of science is characterized by a few distinct flavors: (i) they typically rely on large-scale datasets, ranging from hundreds of thousands to millions of authors, papers and citations; (ii) instead of evaluating metrics, they use models to probe more deeply the mechanisms driving science, from knowledge production to scientific impact, systematically distinguishing predictable from random patterns; (iii) quantitative studies of science no longer pursue the single goal of evaluating and improving the system of science. Rather, researchers from a wide range of disciplines have begun to use science as an observatory to probe social phenomena that are more universal and widely applicable than the institutions of science themselves. As a result, the tools and perspectives vary, involving social scientists, information and computer scientists, economists, physicists and mathematicians, with results published in venues with non-overlapping readerships.

The goal of this satellite is to bring together leading researchers from various disciplines and to foster discussion on the burgeoning subject of quantifying science. We specifically seek contributions that exhibit one or more of the aforementioned flavors.

Areas of interest include, but are not limited to, the following topics:

  • Dynamical and structural properties of citations
  • Patterns behind a (successful) scientific career
  • How institutions shape scientific production
  • Collaborations and team science
  • Emergence and life course of concepts
  • Evolution of knowledge and predictions of future knowledge
  • Altmetrics
  • Peer review process in science
  • Crowdsourcing science


Submission deadline: July 10th, 2015


Submit!