How to Train Your AI Dragon: A Social Scientist’s Guide to Getting Started

Introducing a free short email series for researchers who are curious about AI but not sure where to begin

I’ll be honest with you: I haven’t actually seen the movie How to Train Your Dragon.

I know it involves a Viking boy named Hiccup who befriends a fearsome dragon called Toothless, and that the whole point is that the dragon isn’t the threat everyone assumed — it just needed someone willing to approach it differently.

That’s actually a pretty good description of my experience with AI over the past couple of years.

The Dragon in the Room
If you’re a social science researcher, you’ve probably been watching the AI conversation from a cautious distance. Perhaps you have good reasons. The discourse around AI tends to oscillate between utopian hype and existential panic, and neither is useful if you’re trying to figure out whether any of this actually helps you do your job better (or find a new one).

The academic community has its own version of the village that won’t go near the dragon. There are legitimate concerns about plagiarism, acknowledgments, reproducibility, the homogenization of ideas, what it means for graduate students, and data privacy. Those conversations matter, and we should keep having them. But in the meantime, a lot of researchers are finding that AI tools, used thoughtfully, are changing how they work. 

This short, free, 6-email series is for people who are somewhere in the middle: not early adopters evangelizing ChatGPT at every faculty meeting, and not committed skeptics either. Just researchers who are curious, a little time-poor, and wondering whether the learning curve is worth it.

What “Training” Actually Means
The movie title works on two levels that I find useful. There’s the obvious one: you’re learning to use a new tool. But the more interesting one is that you’re also training the AI, in the sense of learning how to communicate with it effectively. The quality of what you get out is almost entirely a function of what you put in: how you frame your request, what context you provide, what you ask it to do and not do.

Hiccup doesn’t tame Toothless by issuing commands. He learns the dragon’s nature — what it responds to, how to understand it, what it needs. There’s a mutuality to it. That’s a reasonable model for how to think about working with AI.

Some Things I’ve Actually Used It For
Here are a few examples from my daily use of AI tools. In my case, I have an academic subscription to a tool called Taskade, but the examples I’ll give should work with most of the LLMs out there – ChatGPT, Claude, Gemini, etc. 

Building and managing reference libraries. I work across multiple research areas in public health. Keeping track of relevant literature across years can be tedious. Lately I’ve been using an AI-assisted workspace to organize references by year, type, and topic, with links attached, in a structured table I can actually query. A body of literature on homophily analysis in HIV, for example, is something I would have previously maintained in EndNote. Now it’s a living database I can sort, filter, and add to in seconds. I can also send my lit search agent to find the latest relevant publications using Boolean searches on Google Scholar AND free text web searches, then import that info right into my table. This might sound mundane, but it has saved me hours and helped me find publications I might have missed before.
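If you’re curious what “a structured table I can actually query” amounts to, here is a minimal sketch in Python with pandas. The entries are made up for illustration; in practice the AI assistant populates and extends the table, but the underlying structure is just tabular data you can sort and filter.

```python
import pandas as pd

# Hypothetical reference library. An AI assistant fills this in from
# literature searches; the structure itself is an ordinary table.
refs = pd.DataFrame([
    {"year": 2021, "type": "journal article",
     "topic": "homophily analysis in HIV",
     "citation": "Author A et al. (2021)", "link": "https://example.org/a"},
    {"year": 2023, "type": "review",
     "topic": "homophily analysis in HIV",
     "citation": "Author B et al. (2023)", "link": "https://example.org/b"},
    {"year": 2022, "type": "journal article",
     "topic": "Parkinson's disease prevalence",
     "citation": "Author C et al. (2022)", "link": "https://example.org/c"},
])

# Query the library: work on one topic, newest first.
hiv_refs = (refs[refs["topic"] == "homophily analysis in HIV"]
            .sort_values("year", ascending=False))
print(hiv_refs[["year", "citation"]].to_string(index=False))
```

The same filter-and-sort step is what tools like Taskade are doing behind a friendlier interface; the point is that your library becomes data, not a static document.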

Grant preparation. Preparing a competitive funding application involves synthesizing a huge amount of institutional knowledge about what the funder values, how to structure arguments, and what reviewers are looking for. I’ve used AI to draft initial frameworks, test my logic, and identify gaps in my rationale. It doesn’t write the grant application, but it’s a remarkably good thinking partner at 3pm my time when my US collaborators are asleep.

Literature synthesis on complex topics. When I needed to get up to speed quickly on ethnic variation in Parkinson’s disease prevalence and treatment patterns across Australasia, I used AI to help map the landscape: what the key debates were, which populations were understudied, what methodological issues kept appearing across studies. I could then go to the primary literature with much better orientation. This is probably the use case I’d most recommend to social scientists: using AI as a first-pass orientation tool, not a final authority.

Drafting documents you’d rather not be drafting. I have used AI to help draft a formal appeals letter to an insurance company. I have used it to prepare a 90-day plan for a job interview. I have used it to draft emails I didn’t know how to start. None of this is glamorous, but all of it was genuinely useful. The AI didn’t know the full context of these situations, of course. I had to supply that. But it gave me a working draft that I could then shape into something authentic. Starting from something is much easier than starting from nothing.

Organizing messy lists into structured data. I recently had a list of researchers — names, titles, institutional affiliations — that I needed to cross-reference with institutional profile pages. Instead of spending an afternoon on Google, I described what I needed and worked iteratively with my AI tool to identify, retrieve, and populate the data into a structured table. Eighty-odd names, done in the time it would have taken me to do maybe fifteen manually.
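To make the “messy list into structured data” step concrete, here is a small sketch with invented names. The web cross-referencing against profile pages was done interactively with the AI tool; this shows only the kind of structure the output ends up in.

```python
# Hypothetical input: one researcher per line, fields separated by commas.
raw = """Dr Jane Example, Senior Lecturer, University of Somewhere
Prof John Placeholder, Professor of Public Health, Another University"""

rows = []
for line in raw.splitlines():
    # Split into at most three fields so commas inside the
    # affiliation (the last field) are preserved.
    name, title, affiliation = (part.strip() for part in line.split(",", 2))
    rows.append({"name": name, "title": title, "affiliation": affiliation})

for row in rows:
    print(f"{row['name']} | {row['title']} | {row['affiliation']}")
```

Once the list is in this shape, filling in missing fields, deduplicating, or exporting to a spreadsheet is the easy part.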

What It’s Not Good For (Yet)
Since I’m trying to give you an honest account rather than a sales pitch:

AI is not good at knowing what it doesn’t know. It will give you a confident-sounding answer about a niche topic in your field and be subtly wrong in ways that are hard to detect if you’re not already expert. This is particularly dangerous in literature work — always verify citations independently.

It is not a substitute for methodological expertise. It can help you explain your methods, but it shouldn’t be generating them.

It has a tendency toward a certain blandness. Under the hood, genAI is kind of averaging all the text it has ever seen. You have to actively push against it if you want to write with a genuine point of view. The best use of AI in writing is as a structural scaffold, not a finished product.

It doesn’t know your field the way you do. It knows a lot about a lot of things. Your job is to give it the background information and resources it needs, plus good prompts, to get the tool properly trained.

The Point of This Series
If you sign up for this series, you’ll get six emails, one per week, covering topics chosen with social science researchers in mind: 1) what AI actually is, 2) literature synthesis, 3) grant writing, 4) data organization, 5) writing and editing assistance, and 6) job hunting. Each one will be practical and concrete, with examples drawn from actual workflows.

I’m not here to convert you into an AI enthusiast. My aim is to give you enough working knowledge to make an informed decision about what, if anything, is worth your time.

Hiccup didn’t convince the whole village at once. He just showed them what was actually possible when you stop keeping your distance.

Note: I will not use your email address for anything other than sending these emails. I will never share or sell your information to anyone. You can unsubscribe from the series at any time.
