The Space Republic’s Policy on AI Use
At The Space Republic, we believe that original, human-led analysis is at the core of trustworthy journalism. Accuracy, critical thinking, and depth remain our guiding values—even as we integrate new tools like AI into our workflow.
We acknowledge that artificial intelligence is becoming a staple in newsrooms worldwide. Used mindfully, it can support—not replace—journalistic work. But the responsibility for editorial decisions, reporting, and fact-checking remains fully in human hands.
Our stories are AI-assisted, not AI-generated.
Why AI Literacy Matters
AI now plays a growing role in how journalism is produced: transcription, research, data visualization, even content refinement. But with this integration comes a responsibility to bridge the gap in understanding between newsrooms and the public. Transparency about how we use AI builds trust and makes our reporting stronger. Our goal is not only to disclose AI use, but to help demystify it.
Where We Use AI
We use AI in well-defined, limited ways—always under human oversight:
Art & Illustration: We use Midjourney to generate cover images, guided by Francesco Mereu, our external art advisor and professional graphic designer. All imagery is clearly identifiable as AI-generated and is never intended to imitate reality.
Audio & Video Editing: Tools like Descript and Auphonic help us generate transcripts, level audio, and streamline post-production. Editorial sequencing, narrative flow, and content judgment remain human tasks.
Proofreading & Language Polishing: Platforms like ChatGPT Plus and Claude 4 Sonnet help refine grammar, consistency, and clarity, which is especially valuable for our multilingual team of non-native English speakers. The original draft is always written by a human.
Editorial Support: AI tools are personalized to mirror our in-house tone and assist with tasks such as eliminating jargon, aligning with SEO best practices, and suggesting sharper titles or subtitles. However, story selection, framing, and final editing are handled exclusively by people.
Deep Research & Data Visualization: We occasionally use AI to help navigate data sets or suggest formats for charts and graphics. However, all research material is sourced, selected, and cross-checked by human reporters, with a preference for peer-reviewed studies and primary sources.
Our Code of Ethics (What We Don’t Do)
We do not use AI to generate full articles or publishable content from scratch. All text begins as human writing, shaped by human editorial intent.
We do not use AI to alter documentary media. Photos, video, and audio remain untouched: no elements added, removed, or manipulated.
We do not create AI-generated imagery designed to mimic reality. All visuals are clearly creative and illustrative in nature.
We treat any AI-generated suggestion as unverified. Editorial judgment is applied by humans at every step of the process.
We always verify source material manually, especially when AI is used in research. If in doubt, we fall back on traditional methods—reading, cross-checking, and old-school digging.
Reference Materials & Best Practices
This policy draws on several respected journalism initiatives advocating for transparent AI use, including:
The AI Newsroom Toolkit (Poynter Institute)
JournalismAI by the London School of Economics, supported by the Google News Initiative
We also recommend the AP’s six-part video series on AI literacy in journalism, which outlines tools, ethics, and best practices.