
How to Publish and Disseminate Research: Before you publish

A guide to publishing and disseminating your research.

Banner image source: Pixabay 12345 licensed under a Pixabay Content License.

Is your research project governed by an agreement?

For example:

Grant Funding Agreement - where you have been awarded grant funding and there is an associated agreement that sets out the terms and conditions of the grant.

Sponsored Research Agreement - where you are being paid by a company, government department or other institution to undertake particular research work, and there is an associated agreement that sets out the terms and conditions surrounding that research.

Research Collaboration Agreement - where you are working collaboratively with a company, government body or other institution to jointly conduct a research project or program of research and there is an associated agreement that sets out the terms and conditions of the collaboration.

Confidentiality Agreement - where you have received confidential information from a company, government department or other institution and there is an associated agreement that tells you what you can and cannot do with the information. 

Material Transfer Agreement - where you have received certain materials from a company, government department or other institution and there is an associated agreement that tells you what you can and cannot do with the material.

The terms and conditions of an agreement can affect what you are allowed to publish and when. It is important to be aware of the agreement landscape surrounding your research so that you know from the start what your responsibilities are and what you should and should not be doing.

Be familiar with:

  • The Australian Code for the Responsible Conduct of Research
    The 2018 Code replaces the 2007 Code. It has been streamlined into a principles-based document that will be supported by supplementary guidance.
    The Code presents eight clear principles of responsible research and 29 key responsibilities for researchers and institutions. Of particular interest are Responsibilities of researchers R23, R25 and R26.
     
  • Research Integrity Policy
    This policy provides a framework for sound research practice. Of particular interest are sections 9 (Authorship) and 10 (Publication and dissemination of research).

 

Acknowledging use of Large Language Models (LLMs) or other generative Artificial Intelligence (AI) tools in your paper

If you have used, or plan to use, AI tools, including LLMs such as ChatGPT, check the relevant journal's policies on their use and acknowledgement.

Most scholarly publishers agree on the following general principles:

1. An LLM cannot be a study author

Morally responsible publishing requires that authors be accountable for what they write, and generative AI tools cannot be held accountable.

Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Nature Authorship Policy

Authors must be aware that using AI-based tools and technologies for article content generation, e.g. large language models (LLMs), generative AI, and chatbots (e.g. ChatGPT), is not in line with our authorship criteria. All authors are wholly responsible for the originality, validity and integrity of the content of their submissions. Therefore, LLMs and other similar types of tools do not meet the criteria for authorship. Taylor & Francis Editorial Policy on Authorship

2. Authors should be transparent about their use of generative AI

Authors who use generative AI in the development of papers should disclose that use to editors, reviewers and readers. Because generative AI is constantly changing, there are currently no set rules for how its use should be disclosed. However, the editors who authored, and the signatories to, the "Editors' Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing" recommend that "disclosure should describe how the AI was used and should identify AI-generated content. Authors should err on the side of too much transparency rather than too little: when in doubt disclose" (Kaebnick et al., 2023, p. 4).

Some journals, including Science, require that permission be obtained from the journal's editors before any AI-generated content is included:

Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be products of such tools, without explicit permission from the editors. Science Journals Editorial Policies

3. Content generated with assistance from AI tools must be acknowledged in sections of the paper other than the author list.

The Editors' Statement suggests that "some ways of disclosing the use of generative AI could include describing the use in a paper's introduction, methods section, appendix, or supplemental material or citing the generative AI tool in the notes or references" (Kaebnick et al., 2023, p. 4).

You should, however, always check the journal's instructions for authors.

Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript. Nature submission guidelines

Any assistance from AI tools for content generation (e.g. large language models) and other similar types of technical tools which generate article content, must be clearly acknowledged within the article. It is the responsibility of authors to ensure the validity, originality and integrity of their article content. Authors are expected to use these types of tools responsibly and in accordance with our editorial policies and principles of publishing ethics. Taylor & Francis Editorial Policy on Authorship

How the use of AI in a study is described in a paper may vary from discipline to discipline. For example, where relevant, Mathematics and Physical Sciences papers should discuss and reference the following as a minimum, to assign attribution and enable readers to re-create the outcome (see the sketch after this list):

  • Source (e.g. OpenAI/Microsoft, Anthropic/Google, NVIDIA etc.)
  • Model (e.g. GPT-3)
  • Implementation (e.g. davinci-003)
  • Fine-tuning*

*Where the user has fine-tuned the 'inbuilt' knowledge of LLMs based on their own libraries of content via APIs or other processes.
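
As an illustration only, here is a minimal sketch (in LaTeX, since many papers are prepared that way) of how such a disclosure might be worded in a methods or acknowledgements section. The provider, model, implementation and fine-tuning details shown are hypothetical placeholders based on the examples above; adapt the wording and placement to your own study and the target journal's instructions for authors.

% Hypothetical example only -- replace every detail with those of your own study
% and check the journal's instructions for authors before submission.
\subsection*{Use of generative AI}
A large language model was used to assist with drafting the literature summary.
The generated text was reviewed, edited and verified by the authors, who remain
fully responsible for the content of this article.
\begin{itemize}
  \item Source: OpenAI (accessed via the OpenAI API)   % placeholder provider
  \item Model: GPT-3                                    % placeholder model
  \item Implementation: davinci-003                     % placeholder implementation
  \item Fine-tuning: none (the base model was used without fine-tuning on our own content)
\end{itemize}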

Reference:

Kaebnick, G. E., Magnus, D. C., Kao, A., Hosseini, M., Resnik, D., Dubljević, V., Rentmeester, C., Gordijn, B., & Cherry, M. J. (2023). Editors' Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. The Hastings Center Report, 53(5), 3–6. https://doi.org/10.1002/hast.1507

Contact for support

Email your questions to our friendly library staff.

Students

web.uwa.edu.au/askuwa

HDR Students

hdrsupport-lib@uwa.edu.au

UWA Staff

staffsupport-lib@uwa.edu.au

More contact options are available on the Library Contact us page.

Where to Publish webinar recording

This is a recording of a Where to Publish webinar. It is more important than ever to be strategic about where you publish your research, both to maximise reach and potential impact and to ensure you are working with quality, legitimate publishers.

This webinar covers:

  • Identifying and assessing journals, book publishers, NTROs and conference papers for quality, reach and potential impact
  • Open access publishing
  • Avoiding and identifying predatory publishers

CONTENT LICENCE

 Except for logos, Canva designs or where otherwise indicated, content in this guide is licensed under a Creative Commons Attribution-ShareAlike 4.0 International Licence.