Submissions

Submission Preparation Checklist

As part of the submission process, authors are required to check off their submission's compliance with all of the following items; submissions that do not adhere to these guidelines may be returned to the authors.
  • The submission has not been previously published, nor is it before another journal for consideration (or an explanation has been provided in Comments to the Editor).
  • The submission file is in OpenOffice, Microsoft Word, or RTF document file format.
  • Where available, URLs for the references have been provided.
  • The text is single-spaced; uses a 12-point font; employs italics, rather than underlining (except with URL addresses); and all illustrations, figures, and tables are placed within the text at the appropriate points, rather than at the end.
  • The text adheres to the stylistic and bibliographic requirements outlined in the Author Guidelines.

Author Guidelines

We invite authors to review a research tool or a data service for CKIT.

The review should be 1,500 to 4,000 words in length and can be written in English or German. The text may contain references, as long as they support the review’s argument, and a bibliography; neither is included in the word count. A review could include tests of the software and/or test datasets. Such datasets should be referenced and can also be published by CKIT. Images may be included where necessary, for instance screenshots of certain features of the GUI or flowcharts explaining the software structure. Code snippets are welcome and will be treated as text elements, but highlighted through formatting.

As CKIT is a new journal, the publication workflow is under active development. For the time being, contributions should be submitted as Markdown (.md) files; a sketch of a possible file layout is given after the metadata checklist below. Reviews will eventually be available in PDF and HTML formats on the journal website.

These submission guidelines are intended to support authors. They outline the general structure of a review and the main questions to be addressed. Furthermore, they specify important aspects that the author(s) should take into account with respect to the readers' interests. As tools and data services in the humanities have many use cases and applications, and as research practice varies across domains, the author is free to choose the specific aspects they would like to emphasize. Still, certain aspects necessarily need to be part of any review; these are labelled as “mandatory” below.

Title / Abstract / Metadata (mandatory)

  • Author/s of the tool, name of the tool, version, repository (GitHub etc.) and/or DOI
  • Author/s of the data service, name of the data service, URL (date of last visit)
  • Abstract of the review (in English and German) that gives a comprehensive overview of the review; the length should not exceed 130 words per language.
  • Keywords/tags
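
For orientation, the following is a rough sketch of how these metadata elements might be arranged in a Markdown submission file. CKIT does not prescribe this exact layout; all names, dates, and identifiers below are placeholders.

````markdown
# Review: ExampleTool (version 2.1)

- Tool: ExampleTool 2.1, developed by Jane Doe (placeholder names)
- Repository: https://github.com/example/exampletool (last visited: 2024-01-15)
- DOI: (if available)

## Abstract (English)
Up to 130 words …

## Abstract (Deutsch)
Bis zu 130 Wörter …

## Keywords
annotation, usability, TEI

Code snippets are set off as fenced blocks and rendered as highlighted text:

```python
# Illustrative snippet as it might appear in a review
print("hello, CKIT")
```
````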

Conceptual description and evaluation

  • Mandatory or highly recommended: Scope of the tool / data service: which research questions does the software support or address? Which type of research or methodological approach does the tool or service support? “The data service offers data which are (predominantly) used in the research of / scientific work with… This research question / scientific approach fits into the domain of…” This provides the reader with a complete overview of the scientific scope of the software and enables them to assess whether the software or data service fits their field of research.
  • Mandatory or highly recommended: Abstract description of the methodological approach: what the research tool does / what the data service offers. The reader should get a clear picture of how the software processes data. For instance, if the tool provides a spatial interpolation, the underlying assumptions and computations should be highlighted. Sometimes this is not possible because they are not clearly documented; in this case, the review should say so. Based on this description, the reader should be able to decide whether the tool or service fits their methodological approach and/or their requirements for scientific rigor.
  • Recommended: Wherever possible, previous projects with a similar or different approach in the field, or theoretical work on the same problem, should be mentioned to highlight the conceptual work of the developers or the research team behind the tool / data service. The reader should be able to place the specific conceptual approach of the reviewed software within the research landscape.
  • Recommended: Wherever possible, information on research projects or institutions that actively use the tool / data service should be given. This enables the reader to consider additional use cases and provides potential points of contact within the research community.
  • Mandatory: A balanced and justified evaluation of the tool / data service by the author/s, discussing whether the concept of the tool / data service is adequate for the scope described above. As this is the opinion of the reviewer, it should be clearly labelled as such, and the competences of the reviewer should be stated (it is not required to be an expert in the field of research discussed, but a familiarity with the field is obligatory). The statement has to be justified.

The user experience

  • Mandatory: What was the setup/testing environment of the reviewer? The behaviour of a tool might vary considerably according to the technical setup it runs on. For web applications, browsers may behave differently, and native software may behave differently in a Linux or a Windows environment. Therefore, a short description like the following adds transparency to the review: “The tool was tested in the following environment: macOS 12, installed as Homebrew package, interface accessed through Chrome browser 13.5.1.”
  • Mandatory: How is the tool or data service used? Is it a web app? Is an installation needed? Are multiple users supported? What are the technical requirements? As these are crucial features for some working environments, they have to be presented clearly, but not necessarily at length. If the author/s are convinced that this aspect is an important disadvantage, this should be justified with a use case. Please be aware that web apps might not work in every browser and that this can affect the usability in some environments.
  • Recommended: Installation workflow, complexity, support. Wherever possible, the reviewer/s should test the installation on different machines. The workflow (where to get the software, what to do, how complex it is, etc.) should be presented in relation to the assumed user and to the scope of the software. If reviewers are restricted to a single environment, reasonable assumptions should be made about interoperability (e.g. “documentation states that Windows is also supported, untested”). An estimation of the installation’s accessibility can only be carried out considering the scope of the software and the assumed user.
  • Mandatory: How does it work: a description of the main features. In contrast to the conceptual description above, this is the space to give an impression of using the software and working with the tool / data service.
  • Mandatory: Data input and output: what kinds of formats (standards) does the tool / data service process? Mass uploads? APIs? How is the data connected to other resources or information? Is the use of, or linking to, authority files (GND, VIAF, etc.) possible (a brief sketch of such a lookup follows this list)? Are controlled vocabularies used, and which ones? As these are crucial features for some working environments, they have to be presented clearly, but not necessarily at length. If the reviewer is convinced that this aspect is an important disadvantage, this should be justified with a use case.
  • Mandatory: Under which license is the software released? Are there any restrictions on how it can be used? If it is a service, under which license is the data provided? Are there any costs? Which institution ensures the long-term usability of the software/service? Are there any funding sources? Is there an active community that ensures further development? Services and software must be available in the long term to ensure the reproducibility or explanation of research results. Therefore, it is important to estimate to what extent active maintenance and further development are secured.
  • Recommended: User interface: usability and, for GUIs, responsive design. As the user interface is crucial for usability, and an overly complex design invites user errors, this is an important feature. An estimation of the usability can only be carried out with knowledge of the scope of the software and the assumed user.
  • Recommended: Tutorials, help function, community. Are there tutorials, and do they fit the purpose? Are there FAQs, helpdesks, etc.? How helpful are the help functions and the support for the tool / data service? Are they multilingual? This section is especially helpful for readers who might be in doubt about their own skills or those of their team. An estimation of the usability can only be carried out with knowledge of the scope and the assumed user.
  • Recommended: Services (research software). Are there institutions that offer the software as a service? For some readers, this will be crucial information, as it might turn a negative impression, based on an overly complex installation or customization, into a positive one.
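
To make the authority-file question above concrete: a reviewer might check whether records in the tool or service resolve against the GND. The following is a minimal, purely illustrative sketch using the public lobid-gnd API (https://lobid.org/gnd); the GND ID and the JSON field name reflect that service's conventions as we understand them and are not part of CKIT's requirements.

```python
import json
import urllib.request

# Resolve a GND identifier via the public lobid-gnd service.
# 118540238 is the GND ID for Johann Wolfgang von Goethe.
gnd_id = "118540238"
url = f"https://lobid.org/gnd/{gnd_id}.json"

with urllib.request.urlopen(url) as response:
    record = json.load(response)

# "preferredName" holds the main heading in lobid-gnd's JSON
# (field name assumed from the lobid-gnd documentation).
print(record.get("preferredName"))
```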

The developer’s view

This section, which applies only to reviews of research tools, can hardly be covered sufficiently without an advanced understanding of programming. Depending on the purpose of the reviewed object, some items shift between “mandatory” and “recommended”. Furthermore, briefly noting, for example, the build process is sufficient and does not need further explanation. The main objective is to give readers information about how they can contribute to or modify the software/service.

  • Mandatory: Is the code available? If yes, where is it hosted? Who are the main contributors? Is it under active development? Is the development community-driven or institutional? Does a contributor license agreement exist? Besides recent changes in the code, indicators of active development might also be found in pull requests, activities of the developers on Q&A websites related to the tool, or personal conversations (see the first sketch after this list).
  • Mandatory or highly recommended: Is the documentation (for developers) available (mandatory), and is it complete and understandable (recommended)? Sometimes, especially for rather small tools or tools that have been under constant development for quite some time, the documentation is only to be found as comments in the code.
  • Recommended: Is the code readable (following established formatting standards), and/or extendable?
  • Recommended: Is the software tested (e.g. by means of unit tests)?
  • Recommended: For software libraries (i.e. non-standalone packages), does the API follow reasonable standards (e.g. scikit-learn-like integration for ML libraries, illustrated in the second sketch below)?
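
Two illustrative sketches for the points above. First, on active development: besides browsing the repository's web interface, basic activity indicators can be queried programmatically. This sketch uses the public GitHub REST API; the repository name is a placeholder, and the approach applies only to GitHub-hosted code.

```python
import json
import urllib.request

# Placeholder repository; replace with the tool under review.
REPO = "example/exampletool"

def fetch(path):
    """GET an endpoint of the public GitHub REST API and decode the JSON."""
    url = f"https://api.github.com/repos/{REPO}/{path}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# Date of the most recent commit on the default branch.
latest_commit = fetch("commits?per_page=1")[0]
print("last commit:", latest_commit["commit"]["committer"]["date"])

# Open pull requests as a rough indicator of ongoing community activity
# (unauthenticated requests are rate-limited and capped at 100 per page).
open_prs = fetch("pulls?state=open&per_page=100")
print("open pull requests:", len(open_prs))
```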
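
Second, on “scikit-learn-like integration”: scikit-learn expects estimators to expose fit/transform (or fit/predict) methods and to store learned state in attributes with a trailing underscore. A toy transformer following that convention, not tied to any particular reviewed library:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MeanCenterer(BaseEstimator, TransformerMixin):
    """Toy transformer: subtracts the per-column means learned in fit()."""

    def fit(self, X, y=None):
        # Learned state carries a trailing underscore, per sklearn convention.
        self.mean_ = np.asarray(X).mean(axis=0)
        return self

    def transform(self, X):
        return np.asarray(X) - self.mean_

# A library following this convention composes with sklearn tooling, e.g.
# Pipeline([("center", MeanCenterer()), ("model", some_estimator)]).
```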