Hi! CEUR-WS.org started in 1995 with the explicit goal of helping workshop organizers get their proceedings published. At that time, commercial #closedaccess publishers largely neglected workshop proceedings. This has changed in the past 5 years.
As a consequence, we see a number of submissions to CEUR-WS.org that are labelled as “Short paper proceedings” or “Demo & Poster papers”.
I have some reservations about whether this is a good development. These papers are not published in the main conference volume, and not even in a 2nd-tier proceedings volume. The purpose of publishing such papers may simply be to attract their authors to come to the conference and pay the fee.
There are of course exceptions, like “challenge workshops” that solicit only short papers focused on a narrow subject, such as showing a solution to a common challenge defined by the workshop organizers.
But for the rest, I am wondering whether CEUR-WS.org should continue to publish short-paper proceedings.
Comments are welcome!
Technische Informationsbibliothek (TIB), located at the University of Hannover in Germany, has been entrusted by the German National Library (DNB) with providing long-term archival of technical publications, including computer science.
#CEURWS (CEUR-WS.org, a grassroots open-access publisher) has reached an agreement with TIB that ensures the long-term archival of eventually all proceedings volumes published at CEUR-WS.org. The long-term archive is aimed at enabling future generations to retrieve publications from past decades or even centuries. The archive itself is not accessible online.
In addition to the long-term archival, TIB shall also provide a mirror for newly published proceedings volumes from CEUR-WS.org. This gives proceedings editors extra confidence that their volumes remain accessible even if CEUR-WS should come to the end of its operation (which is in the far future, I believe).
More details are at
I would like to thank TIB for this great service! Long-term archival has been a weak spot of CEUR-WS in the past. It is now ensured by the most professional team you can imagine.
I recently attended the local #openaccess week at our university. My great colleague Thomas gave a presentation on PDF/A and on tools for testing PDF/A compliance. I learned that there are many versions of PDF and also of PDF/A. PDF/A is meant for long-term archiving of documents. But there is not even an agreement on the precise interpretation of the PDF/A rules. For example, PDF/A is very picky about (unencrypted) metadata. So, if you include a PDF (or JPG) image inside a PDF/A document, different experts have different opinions on whether the embedded image must come with its own metadata.
Since I myself prefer LaTeX, I was wondering whether the PDF produced by LaTeX is compliant with PDF/A. Well, it usually is NOT compliant. In particular, it seems very difficult to create PDF/A-1-compliant output via LaTeX. The situation is technically better when using MS-Word or LibreOffice. However, even then most PDF documents do not come with proper metadata, because authors do not care.
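For LaTeX users who want to try anyway, one common route is the pdfx package, which attempts to produce PDF/A-conformant output. A minimal sketch (the conformance level `a-2b` and the metadata mechanism are pdfx conventions; pdfx expects the document metadata in a companion file named after the job, e.g. `mydoc.xmpdata`):

```latex
% mydoc.xmpdata -- metadata that pdfx embeds as XMP:
%   \Title{My Workshop Paper}
%   \Author{Jane Doe}

% mydoc.tex
\documentclass{article}
\usepackage[a-2b]{pdfx}   % target PDF/A-2b conformance
\begin{document}
Hello, PDF/A.
\end{document}
```

Even with pdfx, the result is not guaranteed to pass a strict validator; fonts, color profiles, and embedded images can still break conformance, so checking the output with a validator remains necessary.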
So, what is the value of PDF/A for science when we can hardly produce it? For CEUR-WS.org, we would be interested in facilitating long-term archival. But I am sceptical about the contribution of PDF/A. It is a format from the printer age.
Formats like HTML, SGML, and XHTML may be more promising, since their focus is on content rather than on fonts and page layout.
An HTML (or similar) document can directly link to references, data sets, and tools. It may even be queried on its content.
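As a small illustration of what “queried on its content” could mean, here is a sketch using only Python's standard library to pull all outgoing links out of an HTML fragment (the fragment and its URLs are made up for the example):

```python
# Sketch: query an HTML document for its outgoing links,
# using only the standard library's html.parser.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href targets of all <a> tags seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A hypothetical snippet from a proceedings paper in HTML form.
doc = ('<p>See <a href="https://ceur-ws.org/Vol-1/">Vol-1</a> '
       'and the <a href="paper2.pdf">second paper</a>.</p>')

collector = LinkCollector()
collector.feed(doc)
print(collector.links)  # -> ['https://ceur-ws.org/Vol-1/', 'paper2.pdf']
```

Nothing comparable is possible on a plain PDF without first reconstructing its text and link annotations, which is exactly the point in favor of content-oriented formats.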
What is your view on PDF and PDF/A?