Tech in EdTech

Quality by Design in the AI Era

Magic EdTech Episode 78

Dr. Andrea Gregg, Associate Research Professor and Director of Learning Experience Design at Penn State University, outlines a practical approach to building strong online learning in the AI era. She explains why simple, “don’t make me think” design protects real learning, and why every choice should tie back to outcomes and assessment. She covers what makes micro-credentials credible, how to avoid personalization and dashboard features that mislead instructors, and where AI can help without replacing essential thinking. The episode turns solid learning principles into concrete decisions for higher-ed leaders, LXDs, faculty, and product teams.



01:32.71

Thomas Reindeau

Hello, everyone, and welcome to another episode of Tech in EdTech, the podcast where we explore the emerging technologies that are creating the future of learning. I'm your host, Tom Reindeau. On today's episode, I'm delighted to be speaking with Dr. Andrea Gregg. Dr. Gregg is currently an Associate Research Professor and Director of Learning Experience Design with Penn State's Mechanical Engineering Department. Her PhD is in Learning, Design, and Technology, and she is a co-author of the award-winning book “High-Impact Design for Online Courses: Blueprinting Quality Digital Learning in Eight Practical Steps.” Our topic today is quality design in the AI era. We will discuss best practices to build rigorous online learning, ensure credible micro-credentials, and develop human-centered personalization. Dr. Gregg, welcome to Tech in EdTech. To start, could you tell us about yourself and your work at Penn State? What does your day-to-day role in learning experience design involve?


01:16.00

Andrea Gregg

Okay. Yeah, absolutely. And thank you, Tom, for having me. I'm excited to talk to you and hopefully share some interesting conversation. My day-to-day role, like most of ours, varies depending on where things are in process. I would say there are three large buckets, for lack of a better word, that occupy my time right now. The first is that I'm co-leading, for the College of Engineering, our certified micro-credential infrastructure and offerings. My role there involves a lot of day-to-day project leadership, herding cats, so to speak: aligning faculty with topic areas, working with our external partners to make sure that we are developing micro-credentials that meet industry needs, and, most importantly, making sure that the quality our external learners experience is best in class. These are typically four-week short-course offerings. They're online with a live, optional hybrid element where the learners get to meet with the faculty member, the subject matter expert teaching the course. So it's very important that these follow best design practices for online learning. That's the first bucket. The second bucket is within mechanical engineering specifically: we have an online Master of Mechanical Engineering, and I work with the faculty to make sure that those students, though they are invisible compared to the students sitting in front of them in the classroom, get the same level of educational quality. It may look different, but the level of quality is important. Again, a lot of it is collaboration and partnership, conversations about how to structure the course and how to do certain things in teaching that work for the online learner. The third bucket, which is probably everyone's bucket right now, is figuring out how to integrate, and in some cases not integrate, AI into teaching and learning within engineering contexts.


03:51.33

Thomas Reindeau

Great. And what originally drew you into learning design and technology? How did that path lead to your current focus on rigorous human-centered online learning?


04:03.13

Andrea Gregg

So, like a lot of us in edtech or instructional design, broadly speaking, it was a bit of happenstance. I've always had an interdisciplinary background. My undergrad was a math and history double major. I did a master's in the humanities. And then my PhD is solidly a social science, in learning design and technology. I had a two-year stint as a database programmer a long time ago, which is actually coming back to be a little bit helpful as I go further and further into the AI world. And I taught at Penn State for four years as an instructor, but the majority of my career has been in online higher education. Being very interdisciplinary and having a technical background has been very helpful in that. My focus has come from, over the years, working with hundreds of faculty and other instructional designers, interfacing directly and indirectly with thousands of students, and then working with a lot of edtech products: multiple learning management systems, personalization systems, everything from how you award a badge and which system to use for digital badges, to which learning management system to adopt. So I've been involved in a lot of different areas. And I would say the rigorous, human-centered part comes from the fact that this is often what gets lost in the conversation. I've done UX research where we had students use actual course designs and observed where they got confused. Then I showed those videos to the instructional designers who designed the course, and they watched a student fumble to do something they thought was easy. That kind of thing happens all the time, whether it's how we design a course or how we build edtech products: the learner at the end, and in some cases the faculty at the end, aren't focused on as much as they need to be to make the designed experience the best.


06:19.36

Thomas Reindeau

And that interdisciplinary focus is so important, I think, in learning design. Being able to see education from different angles, I'm sure, gives you a unique perspective that can only help students and help the design process. Getting into the quality aspects: what are the must-haves in a quality online course, things that preserve the rigor and student connection even when the experience itself looks and feels different from face-to-face?


06:57.89

Andrea Gregg

I love this question, in part because when I applied for my very first instructional design position years ago, I was asked this. And my answer has not changed. For an online course, the most important thing when a student logs in is that they know where they are and they know where they need to go. Meaning: when you walk into a physical classroom building, you find your room because you know the number. You walk in the room, and there are chairs, and you know you need to sit in one of those chairs. Now, I'm excluding experimental-type learning environments; I am talking about 95% of learning environments. What is so obvious and taken for granted in the physical environment is often overlooked in the online environment. Students need to know how to get around without overthinking it. My philosophy is shamelessly borrowed from Steve Krug, the web usability expert, and his book titled Don't Make Me Think. When you apply that to learning experiences, it's “don't make me think about the stuff that's not important.” Figuring out how to submit my homework is not important. The rigor and the intellectual challenge of doing my homework, from a learning perspective, is what is important. And because online learning merges web UX with learning design, sometimes the UX doesn't get enough attention. So, like I said, I love this question because my answer is still the same: when I log in as a learner, don't make me overthink, stress, or waste limited cognitive energy on how to get around and what I need to do.
As for the second part of that question, rigor: this is where the design model we have in our book, abbreviated HIDOC, for high-impact design for online courses, comes in. The second crucial point is the learning outcomes. While you may do something different as an online learner versus face-to-face, if it's the same course and it has the same learning outcomes, then as the instructor or the instructional designer, I should always come back to that. If you need to be able to calculate a fluid flow rate in some situation, in a face-to-face class I may see you doing that in front of me. In an online class, how do I make sure you've achieved that same objective? That's where the rigor happens. It is always going back to the purpose of the course: when you leave, what should you be able to do, and how do I give you the resources and the instructional material to get there? And I think that's the same across modality.


10:01.51

Thomas Reindeau

Great. So let's apply this then to digital products. How should ed tech products map to a learning design blueprint so faculty don't have to fight the tool to reach that bar?


10:18.18

Andrea Gregg

This question I really like because it forces me to think about how our HIDOC model translates to other contexts. The book and the model were developed for faculty working without instructional designers, so that they can still ultimately offer and teach a very high-quality online course. It's a design model rather than a development model, so the goal is to produce a blueprint, a very well-aligned blueprint. When I translate that to the context of developing an edtech product, I think the steps apply quite well. The first step in the model is learner analysis, which is very parallel to user analysis when you sit down to design something. Who's going to be using this product? What are they likely to know or not know? Where are they likely to struggle? How can you make it relevant to them? The second is learning outcomes, and here it's the “why” of the product. What's its purpose? If I'm a faculty member teaching with this, what am I meant to be able to do, and is that at the forefront? The other steps apply as well. The third step of our course design model is course structure. This is obviously different in an edtech product, where it becomes the overall product interface, architecture, and functionality: how do the users get around? It's never linear, and often there are many paths. But I think all of those things map really well. For faculty using an edtech product: does it give me the information I need to teach better? Does it tell me how my students are doing? Does it give me what I need to grade efficiently?
From a learner perspective: is it encouraging and supporting my learning, not inadvertently discouraging it by being too hard, by using jargon, or by giving me messages that might make me feel like I'm not doing well? If I'm a low self-efficacy learner and I see a big red stoplight on a dashboard, that could have serious effects on my retention.


12:52.94

Thomas Reindeau

Absolutely. I want to stay on that same theme, but shift gears to micro-credentials and what they mean for employers. We're seeing more and more short courses being built, and this is a good thing. But many of those courses lack assessment credibility or workforce relevance. From your perspective, building micro-credentials for STEM professionals, what should that look like? For example, for a 10-to-20-hour micro-credential, what's the minimal structure that preserves rigor without overengineering it?


13:38.02

Andrea Gregg

Yeah, I think that's a great question, and I completely agree. Globally, this idea of alternative credentials and skills-based learning is definitely the direction things are going. We're not going to get rid of bachelor's and graduate degrees; they're always going to have their place. But with shifting technologies and how quickly we all need to be constantly upskilling, we're not going to repeatedly get a new degree every time we need to learn something. We're going to do something like a micro-credential. So I think there are a few core things. One is: who's your audience? Again, we can apply the same model. If this is something you're offering to teach an industry-in-demand skill, talk to industry, find out what they're looking for, find out where the gap is. It's really important that these things are created in conversation, if not in collaboration, with the companies or organizations you're hoping to serve. And then your learners: what's their life situation going to be? I speak to a lot of faculty who are teaching these, and what I'm constantly saying is, you should think of your learner as us, you or me. If we pay for professional development, what do we expect? We expect to be treated with respect. We expect it's well organized. We expect we're getting content. And we expect that, if it's promoted as skills-based, we know what skills we're going to gain and that we're evaluated on them. The big difference from what might previously have been called a workshop or a short course for professional development is that micro-credentials bring that added element of assessment. One example: say it's a communication course, and one of the outcomes is that recipients of this micro-credential can now articulate a problem to their boss in a respectful manner. We do a lot of manufacturing work, for example, so we focus a lot on basic email etiquette.
I can't evaluate that with multiple choice. I can evaluate part of it with multiple choice, but for the rest, I need to see you do it. I need to see you write that email. So it's the alignment between the skill I say you're getting and how I know you've gotten that skill. I think that's super important. All the other design principles of online learning, if this is an online experience, apply. It needs to be clear. It needs to have interaction throughout, so that you're not just reading, reading, reading, or watching, watching, watching. You need to take in information, then do something with it. And your assessments need to align with your objectives, with the skills it says you're going to get.


16:52.86

Thomas Reindeau

Thank you for that. I want to take that a step further and put us in the perspective of the employer. What makes a micro-credential valuable to an employer? How can we communicate to the employer so that the micro-credential reads as proof of a skill rather than simply checking a box for course completion?


17:18.50

Andrea Gregg

Yeah, this is a great question. And this is something I cover when we're onboarding new faculty who are going to be the subject matter experts teaching these: these are very different from teaching a credit course. In a credit course, you have grades; you have some students, especially in areas like engineering, who may not pass the course. Often, if you give all A's, that's actually a sign something is wrong. These, by contrast, are meant to be competency-based. The idea is that ultimately we want everyone to earn the credential. That doesn't mean everyone does, but we don't have to put 30% of the students under, you know, the bell curve. So it's competency-based, and part of the reason these have taken off so much aligns with the technology infrastructure. Most micro-credentials now are reflected with a digital badge, and a digital badge is basically a clickable image that contains metadata and information going well beyond what a grade on a transcript would show. So, if I tell you I took thermodynamics... actually, that's a bad example because that's one that's pretty uniform across schools. If I told you I took Intro to American Studies and I got an A in it, what do you know?

If you hadn't taken or taught that course, you can make some guesses, but you don't really know what I did in the course, and you don't know what skills I gained from it. With a digital badge, you click on the badge and it says: interacted with this subject matter expert, was exposed to these topics, demonstrated competency in these ways. The badge tells you so much more than a transcript or a diploma does. And I think that's the really important part: employers can now see an explicit articulation of skills at a micro level that just doesn't come through in a diploma or a transcript.
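To make the metadata idea concrete, a badge's payload can be sketched as a small data structure. This is a hypothetical illustration loosely inspired by the Open Badges approach; the field names, course, and issuer here are invented for the sketch, not an exact schema.

```python
# Hypothetical sketch of the kind of metadata a digital badge can carry,
# loosely inspired by the Open Badges format -- NOT the exact schema.
badge = {
    "name": "Manufacturing Communication Fundamentals",  # invented credential
    "issuer": "College of Engineering",
    "criteria": [
        "Interacted with a subject matter expert over a four-week short course",
        "Demonstrated professional email etiquette in an instructor-evaluated writing exercise",
    ],
    "skills": ["professional communication", "email etiquette"],
}

def describe(b: dict) -> str:
    """Render the badge metadata as the human-readable summary an employer
    sees on click -- far more detail than a letter grade on a transcript."""
    lines = [b["name"], f"Issued by: {b['issuer']}"]
    lines += [f"- {c}" for c in b["criteria"]]
    lines.append("Skills: " + ", ".join(b["skills"]))
    return "\n".join(lines)

print(describe(badge))
```

The point is not the particular fields but that the credential carries a verifiable, skill-level description rather than a single opaque grade.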


19:39.53

Thomas Reindeau

Thank you for that. Thinking about how AI is changing online design, one of the issues with AI is that it was not necessarily designed to find the correct answer, but rather the most probable answer.


19:57.50

Thomas Reindeau

So what's happening is we're seeing AI that can be very confident even when it's wrong. So institutions now have a challenge to decide what students must still do unaided.


20:10.96

Thomas Reindeau

So, in that context, what framework should faculty use to decide which learning outcomes remain human performed without AI and which outcomes can be AI assisted with appropriate oversight?


20:28.77

Andrea Gregg

Yeah, I mean, you're asking the capital-T-H-E question right now. I probably have that conversation three times a week, easily, sometimes multiple times a day, with different faculty or different administrators.


20:44.80

Andrea Gregg

I just read a study yesterday, conducted by a number of faculty and researchers at MIT. It was a very well-designed, controlled study with three groups: one group wrote a paper using only their brain, no technological resources.


21:06.83

Andrea Gregg

One group wrote the paper with the assistance of an LLM, and one group wrote the paper using Google or another search engine. And they evaluated the outcomes in a bunch of different ways. One was an EEG: they actually measured the cognitive processes happening in the brain while participants were writing the paper, and as a result of writing it.


21:32.54

Andrea Gregg

They interviewed the people and analyzed the papers. And what they found, perhaps unsurprisingly, is that the participants who used the LLM to write the paper showed less cognitive activity.


21:50.16

Andrea Gregg

They had less retention. They had less ownership of the paper. And those who used brain only performed the best. It doesn't mean it was the best written, per se, but from a learning perspective, they did the most.


22:05.31

Andrea Gregg

And I'm forgetting what the title was; it was something about the cost; it had a clever title. But we have to know that with these tools, if you're doing something for the first time, or you're doing the hard process of learning, struggling through a concept or struggling through a calculation,


22:32.30

Andrea Gregg

if you skip steps and have the tool do it instead of you, your brain isn't developing in the same way. So I think those studies are crucial.


22:44.02

Andrea Gregg

And that goes to the great point you made, which is that a lot of the time it confidently gives answers that are wrong. How do you know it's wrong? You know it's wrong because you learned that discipline or that domain.


22:56.49

Andrea Gregg

I mean, it's notoriously bad at basic math. I had it draw me a number line two days ago, and the number line went 0, 10, 10, 20, 40, 60, then -0, -20, -20, -20, -40. I know that's wrong because I learned how to count.


23:18.64

Andrea Gregg

Had I somehow been able to outsource that to AI, I would not have known that that number line was wildly incorrect. And this is two days ago. This wasn't like an earlier version of ChatGPT a year ago.


23:32.11

Andrea Gregg

So, I think that we have to keep students facing those cognitive challenges. I think where AI can be very helpful, and I use it myself in this way all the time when I'm learning something new, is have it explain it to me in a different way.


23:51.60

Andrea Gregg

Have it ask me questions. Have it break it down so that I can process it. I have the benefit, though, of a high level of metacognition, a high level of awareness of my own learning process. So I know if I'm kind of “cheating,” quote-unquote. A lot of students don't know that yet. They're still developing metacognition. They're still learning how to learn, learning how to monitor their own processes. So it's a challenging time, to be sure.


24:20.27

Andrea Gregg

And I think that, especially in engineering, do you want the people designing your bridge to have used AI in the class where they learn about the fundamentals, about the physics of bridges?


24:35.56

Andrea Gregg

Probably not. Do you want them to get assistance in their learning in terms of learning it better and more deeply and on a more conceptual level because they had an alternative tutor that might be AI? Sure.


24:49.44

Andrea Gregg

So I think you're asking absolutely the question that anyone involved in education is struggling with right now.


24:58.58

Thomas Reindeau

And I'm sure this will become an easier question to answer as the data that AI is pulling from gets better. This is, at the end of the day, a data question.


25:11.65

Thomas Reindeau

So, with that context, what does genuine AI literacy look like in higher ed? I'm trying to get beyond tool tips, so that students are in a position to critique outputs, understand limits, and avoid that kind of over-reliance.


25:31.78

Thomas Reindeau

How do we get there?


25:34.15

Andrea Gregg

So, toward that, we are right now developing an AI fundamentals micro-credential that will probably be offered free or at very low cost.


25:46.44

Andrea Gregg

And the idea is that we are targeting not just students but every kind of citizen, and I mean citizen of the world, with what they should know about AI. For anyone who's using it, or interacting with people using it, there's a basic literacy that we all need.


26:02.67

Andrea Gregg

And I think it starts with this: humans, for as long as we have been creating tools, have tried to make things that can operate like humans. So it's not new in that regard.


26:15.65

Andrea Gregg

I think we need to have a bit of a sense of that. I think the point you made about predictive versus knowing things is crucial. And I have found very smart people who don't know how it works.


26:31.00

Andrea Gregg

When I say, you know, it's terrible at addition... I mean, it's gotten better, because it's gotten more data that gives it answers to addition problems. But there was that stretch for a while where it couldn't tell you the number of R's in the word strawberry.


26:47.99

Andrea Gregg

And if you opened three different LLMs and asked each one how many R's are in the word strawberry, you'd get three different answers. Again, now that it's been written about so much, it gets it right.


26:58.92

Andrea Gregg

So I think people need to understand how it's built, this predictive modeling. That doesn't mean they need to be able to write the algorithms or anything like that, but they should understand that it's not a book that's been peer-reviewed or edited. It's not something in the library. It's not an academic paper. It's not a dictionary, and it's not a calculator.


27:24.53

Andrea Gregg

It's exactly what you said. It's a prediction. Which is why it's so good at so many things in shocking ways and so bad at very simple things in shocking ways. So I think that's crucial.
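The contrast here can be made concrete. Counting letters is a deterministic computation: a couple of lines of code execute a rule and return the same answer every time, whereas an LLM samples a likely-looking answer. A minimal sketch:

```python
# Deterministic counting: the answer follows from executing a rule,
# so it is identical on every run -- unlike an LLM's sampled prediction.
word = "strawberry"
r_count = word.lower().count("r")
print(f"'{word}' contains {r_count} r's")  # always 3
```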


27:38.25

Andrea Gregg

I think people need to understand what's happening with their data: privacy issues, what you can and shouldn't upload or put in there.


27:52.61

Andrea Gregg

So, in terms of genuine AI literacy, I would include a little bit about human history and what changed to make this explode, and the fact that this wasn't an overnight thing.


28:05.31

Andrea Gregg

This has been going on for decades, trying to create something like this. And then, how it works with the data and prediction, and what it means to have misinformation. It's garbage in, garbage out.


28:22.19

Andrea Gregg

If you're feeding it a book full of errors, it's going to make predictions based on bad information. And then, how can we use it? As for the limitations, I think learners should know that it can short-circuit certain processes you need to go through to build those mental muscles.


28:45.79

Andrea Gregg

And then how to use it, practically speaking, is, I think, the easiest part to figure out. And I think we can do it well in four weeks. Of course, there are entire degrees on AI; we've got a couple at Penn State, and four weeks is not going to cover that. But in terms of that basic literacy, I think there are just a few really core things everyone should know if they're using it or interacting with people using it.


29:14.71

Thomas Reindeau

And we know that all students are using it, whether we've guided them or not. So including that core instruction is becoming more and more essential. I want to talk about personalization, again with this idea of guardrails. We know we want to personalize wherever we can for students; that helps meet them where they are. But what guardrails should personalization systems meet before we trust them in a high-stakes course?


29:59.92

Andrea Gregg

We need to make sure the systems do what they say they're going to do. And we need to think about the unintended consequences. So, for example, years ago, before recommender systems were the norm, some futurists were talking about how cool it would be when you read a news article and then the system recommends other related articles. Or you listen to a song and the system gives you other songs in that genre.


30:30.52

Andrea Gregg

And that's great, but it has also created these political echo chambers we have across the world where people are only hearing what's personalized to them.


30:43.42

Andrea Gregg

And, I think that with education, we need to be very aware of that. So, for example, right now, what happens is a system tries to assess my knowledge state, which can't ever fully be done.


31:00.72

Andrea Gregg

For example, that AI study I talked about used three instruments, interviews, the work product, and brain scans, to determine what was happening.


31:11.74

Andrea Gregg

So it's always multifaceted, what someone knows, how they know it, how they're thinking. So, it tries to assess my knowledge state. It might try to assess what I gravitate towards in learning preferences. So maybe I tend to watch videos versus reading or tend to read versus watching videos.


31:31.96

Andrea Gregg

It tries to assess what I already know and what I'm struggling with. So I think one thing is to realize that's always going to be an imperfect assessment because it's only got me interacting with the system, at least right now.


31:46.78

Andrea Gregg

I think low-stakes testing is really important before it's used widely in higher-stakes situations.


32:04.09

Andrea Gregg

One thing that doesn't get enough attention, and I mentioned this in the kickoff, is the impact of the interfaces. This isn't personalized learning in general; this is adaptive learning and learning dashboards specifically.


32:17.23

Andrea Gregg

But years ago, I was involved in a number of vendor conversations and worked with different products. And people were surprised to learn that learners could be discouraged by a dashboard.


32:34.85

Andrea Gregg

And that they could see that red stoplight or the yellow light and start to doubt themselves and think, well, I'm not doing well, I guess I'm not good at this.


32:47.21

Andrea Gregg

And part of that is that there aren't enough educational psychologists working on these products, thinking through the ways these might impact learners that we're not anticipating.


32:59.23

Andrea Gregg

So one example, with a personalized system, or more of an analytics-driven thing someone had made: it's just like when you get your utility bill


33:10.35

Andrea Gregg

and it shows you how you're doing. Like if you're using way more electricity than you did last year, it might show it in red or have an exclamation point or something to draw your attention to it.


33:23.26

Andrea Gregg

And they said, that's what we're trying to do with learning. The challenge is that I'm not personally attached to my electricity use. That communication feels fairly neutral to me: okay, I need to do better at turning lights off.


33:36.68

Andrea Gregg

With learning, I see an exclamation point and it's saying I'm not doing well, and I might doubt myself. So I think those things are really important. Find out not just what the quantitative analytics show, but what people's experiences in these systems are. Use that kind of multi-method evaluation.


34:01.88

Thomas Reindeau

Thank you. Let's delve into that a little bit more. Which learner data points would actually help a professor personalize an experience? And conversely, which metrics might actually mislead the instructor, if our goal is better learning decisions rather than just prettier dashboards?


34:27.36

Andrea Gregg

Yeah, so I have a recent example with a dashboard. I was looking at it to see learners' progress in a course.


34:42.16

Andrea Gregg

And the dashboard, at first glance, showed, let's just take Learner X, that Learner X had completed 80% of the course.


34:54.74

Andrea Gregg

So I'm like, great. I have to click on that to then see they have spent six seconds in the course.


35:07.73

Andrea Gregg

How in the world could they have completed 80% of the course when they've spent six seconds in it? Well, it's because of the way that dashboard worked: if they clicked on a page, it showed it as completed.


35:23.34

Andrea Gregg

Well, these pages had videos, and the videos are six minutes, eight minutes, four minutes long. So it wasn't integrating the analytics from the playing of the video with the analytics of clicking on the page.


35:35.83

Andrea Gregg

So to me, it's kind of a dangerous dashboard. A busy faculty member looking at it thinks: okay, 80 percent of my students have gone through the course. Great. The class is on track.


35:49.73

Andrea Gregg

Don't make them have to click in to find the information that's actually much more revealing: students aren't spending any time in here. So I think that's just a small example. And the dashboard, to your point, was very pretty.


36:03.88

Andrea Gregg

Like it looked good. It made me feel good. It made me feel like, oh, cool, like this is a successful course. And then I start clicking in and I'm like, no, there's a problem here. But I wouldn't have known if I hadn't clicked in.


36:16.75

Andrea Gregg

So I think that's really important. Of course, we always have to remember that even playing a video doesn't mean you're watching it. It doesn't show what you're doing with it, or whether you're engaged with it, et cetera.


36:32.61

Andrea Gregg

So again, to my earlier point, we need mixed methods. H5P and other interactives give us ways to have students interact with content on a more regular basis.


36:46.41

Andrea Gregg

And I think that's really helpful, too, like our students completing these little self-check quizzes, et cetera. So again, the more data, the better. And putting it in context is really important.
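[Editor's note: to make the dashboard pitfall Dr. Gregg describes concrete, here is a minimal Python sketch. The field names and thresholds are hypothetical, not drawn from any actual LMS; it simply contrasts a click-based completion metric with one that also weighs video watch time.]

```python
# Hypothetical illustration of the dashboard pitfall described above:
# counting a page as "complete" on click, versus also requiring the
# embedded video to have actually been watched.

def naive_completion(pages):
    """Fraction of pages clicked at least once (click == complete)."""
    clicked = sum(1 for p in pages if p["clicked"])
    return clicked / len(pages)

def integrated_completion(pages, watch_threshold=0.8):
    """Fraction of pages done, where a page with a video only counts
    once watch_threshold of its duration has been watched; pages
    without video count on click."""
    done = 0
    for p in pages:
        if p["video_seconds"] == 0:
            done += p["clicked"]
        elif p["watched_seconds"] >= watch_threshold * p["video_seconds"]:
            done += 1
    return done / len(pages)

# LearnerX clicked through 4 of 5 pages but spent seconds on the videos.
learner_x = [
    {"clicked": True,  "video_seconds": 360, "watched_seconds": 2},
    {"clicked": True,  "video_seconds": 480, "watched_seconds": 1},
    {"clicked": True,  "video_seconds": 240, "watched_seconds": 3},
    {"clicked": True,  "video_seconds": 0,   "watched_seconds": 0},
    {"clicked": False, "video_seconds": 300, "watched_seconds": 0},
]

print(naive_completion(learner_x))       # 0.8 -> dashboard says 80% complete
print(integrated_completion(learner_x))  # 0.2 -> only the no-video page counts
```

The gap between the two numbers is exactly the "80 percent complete in six seconds" problem: the metric an instructor sees first should be the integrated one.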


36:58.99

Thomas Reindeau

And you make a really interesting point about instructors not having to do more work to find the right data. It ties back to what you were saying at the beginning of our conversation about the difference between how we all know what to do in a physical classroom, and yet the rules aren't quite as clear in an online classroom. And this is another example of that. Don't make it any harder on the instructors than you would want to make it on the students.


37:34.04

Andrea Gregg

Yeah, and to that point, Tom, I was involved in an adaptive learning pilot years ago.


37:41.83

Andrea Gregg

And the instructors are the crucial piece. We had instructors just not use the system because it was too hard. They would tell us that: I had to click too many times. It didn't help me. It made it harder to grade my course. I'm not going to use it.


38:00.02

Andrea Gregg

So the students can't get any benefit out of it, especially in these systems that involve instructor intervention, if the instructor finds the interface doesn't work with their workflow or is just too complicated.


38:18.71

Thomas Reindeau

Exactly. I want to finish today by talking about one of the newer tools in edtech: XR. Are you using XR tools, augmented reality, virtual reality?


38:39.19

Thomas Reindeau

It's just emerging. It isn't mature everywhere, but at its best, it can materially aid spatial reasoning and mechanical design when it's used precisely. So my question is, what's a good decision rule for evaluating when XR adds real value in engineering education?


39:01.78

Thomas Reindeau

And what are the clues that 2D or physical alternatives are just as good?


39:09.91

Andrea Gregg

I mean, my personal opinion is


39:17.27

Andrea Gregg

hands-on is always great, if you can do it. It's always good. Even if you're ultimately as an engineer going to be somewhat removed from the hands-on, it's still good to know sort of the experience of working with the product, the tools, the frontline, etc.


39:35.99

Andrea Gregg

But we also have challenges with that, one of which is class size. I worked in a mechanical design course, and the class size made it prohibitive for the faculty to bring in all the machine parts, you know, gears, shafts, pulleys, bearings, all of these things.


39:54.14

Andrea Gregg

The other thing is that if I'm holding a bearing and it's just me and these machine parts, I don't have all the other information.


40:06.02

Andrea Gregg

So XR allows me to have it in context. So I think it's an alternative if you can't get access to hands-on things. It augments hands-on things, which is the kind of AR part of it, to give additional context, et cetera.


40:28.95

Andrea Gregg

And it's also really good for things that, for safety and size reasons, are largely prohibitive. So for example, you could have a wind turbine that's 300 feet tall,


40:45.34

Andrea Gregg

and you're not going to have your class all climbing this wind turbine to look at the blades and all of that. You could do a VR experience with that just to give a sense of


41:00.91

Andrea Gregg

proportionality and just size and all of that. The other thing that VR especially is really good at is the motivation and the ability to gamify it, like make me a player in this, like trying to solve something. It adds some of those elements.


41:20.27

Andrea Gregg

The challenge, especially with VR, is it's just so darn expensive. And I don't just mean expensive price-wise, though it is that. I mean the programming, the testing, the headsets, all of that. And then the reality of dizziness.


41:38.70

Andrea Gregg

It's a minority, but it's not one or two cases. So I don't think there's a rule of thumb. I think there's definitely places for it. And definitely when you can't have the hands-on, or the hands-on really needs more context, I think it's great. I just think it's tough because of all those other factors. I will say,


42:08.06

Andrea Gregg

We did 360 video in a manufacturing plant, and that's included in some of our micro-credentials related to manufacturing.


42:19.36

Andrea Gregg

And you can still play it and drag around with the mouse and see a lot. It's very cool,


42:29.86

Andrea Gregg

and it's relatively low cost in terms of not needing a headset. Yeah, it'd be better if I had a headset, but I don't need one to get much more than I would with just a picture or a standard video of that space.


42:47.21

Andrea Gregg

So I think there are lower-barrier ways we can use these kinds of technologies to really enhance the learning experience.


42:59.48

Thomas Reindeau

Great. And thank you for that. To end our conversation today, let's do a quick lightning round. So I'm going to shoot a couple of questions at you and see what you think.


43:12.34

Andrea Gregg

Okay.


43:12.44

Thomas Reindeau

So first, what is one quality myth about online courses you'd like to retire?


43:20.20

Andrea Gregg

Okay, that's a great question. I don't know if this is a myth, but it's a conflation. So, a lot of people have had a bad online learning experience, whether it was poorly designed, or like I said, they logged in and they didn't know where they were, or their teacher ghosted them, or any number of things.


43:41.93

Andrea Gregg

But that doesn't mean that online learning doesn't work. Just the same way that most of us have had a bad in-person learning experience where maybe the teacher didn't explain things well or the assessment didn't align with what the course was about or things were confusing.


44:00.88

Andrea Gregg

But, we typically don't then say you can't learn in a classroom. We say that was a bad instructor. This course wasn't well designed. So I think for some reason with online, we still tend to blame the modality rather than the course design and the instruction and all those other factors.


44:16.89

Andrea Gregg

So I guess I just wish that we could evaluate it more for what made it bad or what made it good, rather than kind of throw out the modality with one example.


44:30.13

Thomas Reindeau

Great. What is a micro-credential pitfall that you feel quietly kills employer trust?


44:40.24

Andrea Gregg

Not speaking their language. So I was just at an alternative credentialing conference, UPCEA Convergence, very good conference.


44:51.47

Andrea Gregg

And one of the panel discussions included folks from industry talking about this exact thing. And they said


45:02.72

Andrea Gregg

you put it in our terms. If you're speaking to someone about why this is important, put it in terms that make sense in my context. Don't over-academicize it if it's not necessary. Because if I feel like you're speaking to me very theoretically or abstractly and it doesn't relate to how it's going to make my job better or my team's job better,


45:26.97

Andrea Gregg

or make them better at their job, there's going to be that disconnect. So I think, again, it just goes back to that like co-creation, collaboration conversation between higher ed and the organizations that they're hoping to serve with these micro-credentials.


45:41.43

Thomas Reindeau

Yeah, thank you for that. And this one I'm going to ask as the parent of a college student. What is the single best AI-assisted task that students should learn first?


45:57.24

Andrea Gregg

How to ask it a question and then figure out if it's accurate. So I think we are still at the stage where… I actually am working with a colleague who is having students solve a thermodynamics problem and then put that problem into AI and then evaluate what it got right and what it got wrong.


46:21.46

Andrea Gregg

And I think everyone should have to do something similar. Solve it first or find the correct answer first. Put it in AI and evaluate it. Don't do the reverse, because then you go back to my earlier study, where your brain isn't working as much as it needs to.


46:37.84

Andrea Gregg

But the single most important task, I think, undergrad students should do is learn where it screws up or learn where its limitations are.


46:48.27

Andrea Gregg

Because without that, you can't critically evaluate it. It can't really be a kind of thought partner if you don't know how it thinks. So to me, that's really important.


47:02.30

Thomas Reindeau

And finally, what's one personalization promise you'd defer until the evidence improves?


47:10.48

Andrea Gregg

That we've solved it. That we have it figured out. I think we are all still working towards this, but we've not figured this out.


47:21.16

Andrea Gregg

Like it's in progress, and there are some things it does very well. But there are, like I said, unintended consequences, like the echo chamber, that I think we need to pay a lot of attention to.


47:36.56

Andrea Gregg

I guess the other thing, and I know you said one thing, is the idea that one stakeholder can solve this. Ed tech alone can't solve this. You need faculty, you need educational psychology, you need UX designers all together to figure out the best way to do this.


47:53.47

Andrea Gregg

So that's what I would end with. As much as possible, we're just in a world where we have to be interdisciplinary, work across perspectives, and bring different stakeholders together to actually solve these very complicated problems.


48:10.15

Thomas Reindeau

Great. Thank you so much. To pull all of our conversation together, when higher ed administrators and course design experts are evaluating syllabi for approval, what is one criterion that they could add to their reviews that would protect learning fundamentals while welcoming AI?


48:32.54

Andrea Gregg

I think that faculty should have to explicitly address how and when students can and cannot use AI and why.


48:44.33

Andrea Gregg

So, I think there are very legitimate reasons for integrating it, and there are very legitimate reasons for prohibiting it. I think the problem we have right now is faculty aren't, and it's not their fault.


48:58.41

Andrea Gregg

They don't know what to do any more than any of us do, but they aren't addressing it explicitly enough with the why. So students either feel like, well, I'm not sure if I'm allowed to use it. Am I cheating?


49:11.77

Andrea Gregg

Or my faculty said, go crazy with it, but I don't know, am I not learning? So that needs to be there explicitly. That's what I would add to that evaluation rubric: faculty addressing the can, the cannot, and the why in their syllabus.


49:33.78

Thomas Reindeau

For ed tech builders, what's one product change that would instantly reduce faculty adoption friction in a good and rigorous online course?


49:48.56

Andrea Gregg

Don't overstate. Don't use a case study of one and then make a generalization as if it's a rigorous study with peer review. I think that's what turns off a lot of faculty, and I'm not saying this is great, but faculty are trained to be critical thinkers and they have an almost allergic reaction to what feels like a sales pitch.


50:14.49

Andrea Gregg

So I know the tech industry has these people, because I know a lot of people working these jobs who have an education background or have gained one through doing this.


50:27.50

Andrea Gregg

Have those people in the conversations; have them writing some of the stuff. If a product does something, faculty want to see the evidence.


50:38.48

Andrea Gregg

If you say a product's based on how people learn, give me the citations for that. This is how faculty are trained to think. So they are going to be more open to a product and more willing to experiment with it or include it.


50:53.18

Andrea Gregg

If the company has done the homework to put it in their terms, which means research and statements backed by established theory, et cetera. So I think it's that conversation part that can make a big difference.


51:11.09

Thomas Reindeau

Dr. Gregg, thank you so much for your time and this deep conversation today. This is incredibly valuable. I want to offer you an opportunity for any final thoughts or parting advice that you'd like to give.


51:26.02

Andrea Gregg

I mean, thank you. I always enjoy getting to talk about all the things at once. These kinds of things are a lot of fun for me.


51:37.41

Andrea Gregg

I think my advice to all of us in higher ed, in industry, as learners, as teachers, is we're just at a state where everything is


51:49.93

Andrea Gregg

way more complicated. Which means we just have to have more collaboration and conversations. And, actually, that would be my advice politically as well. We have to get out of these echo chambers and just start talking across these divisions, like vendor versus faculty. It really has to be us in it together to figure these things out, because a single faculty expert alone cannot solve this.


52:24.25

Andrea Gregg

A single ed tech company alone cannot solve this. So I think it really is just, to borrow from a book that was actually about politics, talking across the divide, across these sort of artificial differences, to find that we all want to help student learning. How do we do that best?


52:43.13

Thomas Reindeau

Dr. Gregg, thank you for your time today. This has been Tech in EdTech.


52:49.18

Andrea Gregg

Thank you, Tom. This was a lot of fun.