Objectica
Published HCI Research
Mixed Methods
AR/VR Mobile
EdTech
Situated Cognition in Educational Mobile Apps: Does Physical Situatedness Help Learning?
My Role: UX Researcher — study design, participant recruitment and onboarding, session facilitation, qualitative and quantitative data collection and analysis
Team: Embodied Learning Experience Lab, University of Florida
Timeline: 2023–2024
Published: IEEE VR 2024 & IEEE ICALT 2025
The Problem
TL;DR: Situated learning, real or not real?
Situated cognition theory, the idea that learning sticks better when it happens in a real, authentic context, is widely cited in educational technology research. Yet surprisingly little research has tested whether situated learning, separated from applied learning, actually works.
This project targets that gap. Specifically: if you build a science learning app that ties educational content to a person's everyday environment, does it actually help them learn?
The question mattered because AR and mobile learning apps frequently cite situated learning as a design rationale. If that assumption were wrong, it would have real implications for how EdTech products are built.
What We Built
TL;DR: Let's find out!
We designed Objectica, an Android app that teaches everyday STEM concepts through the objects around you. Point your camera at a kitchen knife, and the app explains friction. Look at a microwave and learn about wave frequencies.
To isolate the effect of physical situatedness, we built two versions:
Objectica-SCT: Used the phone camera and YOLO object detection to surface science facts about objects in the user's actual environment
Objectica-NonSCT: Delivered the same science facts, but randomly — no camera, no connection to the user's surroundings
Everything else was identical.
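The situated version's core loop can be thought of as a label-to-lesson router: the detector names an object in the camera frame, and the app looks up a matching science fact. The sketch below is illustrative, not the production Android code; `detect_objects`, `FACT_LIBRARY`, and `lesson_for_frame` are hypothetical names, and the YOLO detector is stubbed out so the example is self-contained.

```python
# Illustrative sketch of Objectica-SCT's detection-to-lesson routing.
# In the real app, object labels came from an on-device YOLO model;
# here the detector is stubbed so the routing logic stands alone.

FACT_LIBRARY = {
    "knife": "A knife cuts by concentrating force on a tiny edge area, "
             "reducing the friction that resists slicing.",
    "microwave": "A microwave heats food with electromagnetic waves at "
                 "about 2.45 GHz, a frequency water molecules absorb well.",
}

def detect_objects(frame):
    """Stand-in for the YOLO detector; returns object labels in a frame."""
    return ["knife"]  # stubbed result for the sketch

def lesson_for_frame(frame):
    """Return the first (label, fact) pair for a recognized object, else None."""
    for label in detect_objects(frame):
        fact = FACT_LIBRARY.get(label)
        if fact:
            return label, fact
    return None
```

The non-situated version would skip `detect_objects` entirely and draw a fact at random, which is what made the camera link the only variable between conditions.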


The Research
We ran two studies. The first was a preliminary between-subjects study with 43 participants to test our hypotheses and refine our measures. The second was an expanded week-long study with 56 participants, collecting richer data across motivation, engagement, and learning outcomes.
My contributions spanned both studies:
Participant recruitment, screening, and scheduling
Onboarding sessions via Zoom, including app installation walkthroughs and consent processes
IRB protocol compliance and management of participant records
Post-study questionnaire design using validated instruments (IMI, SUS, ARCS)
Quantitative analysis in SPSS (t-tests, Mann-Whitney U, Shapiro-Wilk normality testing)
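The analysis itself ran in SPSS, but the same decision logic, check each group for normality with Shapiro-Wilk and fall back to Mann-Whitney U when normality fails, can be sketched in Python with SciPy. `compare_groups` and its alpha threshold are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick a two-sample test based on Shapiro-Wilk normality checks.

    Returns the test name and its two-sided p-value. If both groups
    look normal, use an independent-samples t-test; otherwise fall
    back to the non-parametric Mann-Whitney U test.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        result = stats.ttest_ind(a, b)
        return "t-test", result.pvalue
    result = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "Mann-Whitney U", result.pvalue
```

In practice you would feed this per-condition questionnaire scores (e.g. IMI subscale means) and report the chosen test alongside the p-value.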
What we measured:
Perceived relevance and motivation to learn (Intrinsic Motivation Inventory)
Engagement (app interaction logs: notification clicks, lessons expanded, time on page, repeat visits)
Learning outcomes (multiple choice quiz scores per object encountered)
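As a rough illustration of how interaction logs become an engagement metric, the sketch below counts voluntary repeat visits from event rows. The event schema (`user`, `type`, `object`) and the `repeat_visits` helper are hypothetical, not the app's actual telemetry format.

```python
from collections import Counter

# Hypothetical log rows; the real data came from in-app interaction logs.
events = [
    {"user": "p01", "type": "notification_click", "object": "knife"},
    {"user": "p01", "type": "lesson_open", "object": "knife"},
    {"user": "p01", "type": "lesson_open", "object": "knife"},
    {"user": "p02", "type": "lesson_open", "object": "microwave"},
]

def repeat_visits(events):
    """Count lesson opens beyond the first for each (user, object) pair."""
    opens = Counter(
        (e["user"], e["object"]) for e in events if e["type"] == "lesson_open"
    )
    return sum(n - 1 for n in opens.values())
```

Here `p01` opened the knife lesson twice, so the metric counts one voluntary revisit; `p02`'s single open contributes nothing.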


What We Found
The results were nuanced — and honestly, more interesting for it.
What physical situatedness did do:
Users of the situated app reported significantly higher motivation to learn
They voluntarily revisited lesson content far more often (Cohen's d = 0.73, a medium-to-large effect) — a behavior not tied to any study task, meaning it reflected genuine interest
What it didn't do:
No significant difference in quiz scores — learning outcomes were comparable between groups
Most multi-item engagement measures showed no statistical difference
The insight: Situatedness boosted motivation and intrinsic engagement, but didn't translate to measurable knowledge gains in a one-week window. This raised a deeper question — what kind of context authenticity is needed to move the needle on actual learning?
Impact & Recommendations
The findings directly informed the direction of future research in the lab and contributed to a broader conversation about how EdTech teams should apply situatedness as a design principle — not as a guaranteed learning boost, but as a tool for motivation and sustained engagement.
Key design takeaways we surfaced for product teams:
Physical situatedness is a strong lever for re-engagement (repeat visits), not just first-contact motivation
Apps should consider surfacing objects of personal relevance rather than incidentally detected ones — a clear product direction for future iterations
Single-item measures can be valid complements to multi-item scales, especially for motivation
The work was funded by the National Science Foundation and published at two IEEE conferences.
Reflection
The most valuable thing I took from this project was learning how to sit with mixed results. In a product context, "no significant difference in learning outcomes" isn't a failure — it's a finding that redirects resources. Being able to communicate that clearly, and extract actionable design implications from inconclusive data, is a skill I use in every research project now.






