Online Learning in the Second Half

John Nash & Jason Johnston

In this podcast, John Nash and Jason Johnston take public their two-year-long conversation about online education and their aspirations for its future. They acknowledge that while some online learning has been great, there is still a lot of room for improvement. While technology and innovation will be topics of discussion, the conversation will focus on how to get online learning to the next stage: the second half of its life.
Education

Episodes

EP 30 - Dr. Omid Fotuhi and the Sense of Belonging in Online Learning
Sep 9 2024
In this episode, John and Jason talk with Dr. Omid Fotuhi, a research associate at the University of Pittsburgh and the Director of Learning Innovation at WGU Labs, about the notion of belonging in the evolving landscape of online learning. They discuss the WGU model and how it breaks traditional barriers through competency-based, self-paced education, the critical role of fostering a sense of belonging for student success, the need for institutions to move beyond temporary interventions to address deeper structural issues, and the future of education where learning becomes more independent. See complete notes and transcripts at www.onlinelearningpodcast.com   Join Our LinkedIn Group - Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)   Links and Resources: Inscribe - Community-based educational software application; "Where and with whom does a brief social-belonging intervention promote progress in college?" article: https://www.science.org/doi/10.1126/science.ade4420; Dr. Omid Fotuhi Contact Information - LinkedIn: https://www.linkedin.com/in/omidfotuhi/   Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License. Transcript We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!   [00:00:00] Omid Fotuhi: The notion and the assumption that learning happens best, as measured by seat time, the number of hours you spend.   [00:00:07] Omid Fotuhi: Ha.   [00:00:08] John Nash: So   [00:00:09] Jason Johnston: rookie mistake, John. Come on. We haven't quite been at this a year yet, Omid. so…   [00:00:15] John Nash: My phone is off, but my Macintosh rang.   [00:00:18] Omid Fotuhi: Yeah. Okay. Yeah.   [00:00:21] John Nash: I'm John Nash here with Jason Johnston.   [00:00:25] Jason Johnston: Hey, John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.   [00:00:31] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning has had its chance to be great and some of it is, but still, a lot of it really isn't. And so Jason, how are we going to get to the next stage?   [00:00:47] Jason Johnston: That's a great question, John. How about we do a podcast and talk about it?   [00:00:51] John Nash: I think that's a great idea. What do you want to talk about today?   [00:00:55] Jason Johnston: Today we are joined by Dr. Omid Fotuhi. Omid, welcome to the podcast.   [00:01:01] Omid Fotuhi: Thank you. It's great to be here.   [00:01:03] Jason Johnston: Can we call you Omid?   [00:01:05] Omid Fotuhi: That sounds great.   [00:01:06] Jason Johnston: Okay. Omid is a research associate at the University of Pittsburgh and director of learning innovation at WGU Labs. So great to have you here to talk with us today.   [00:01:17] Omid Fotuhi: I look forward to it.   [00:01:19] Jason Johnston: You and I, we met over dinner through the company Inscribe at a conference. And one of the things that, of course, immediately, just made me realize that you were just a great guy is our common love of Canada. We talked about living in Canada and talked a little bit about longing to live in Canada again.   And so I appreciated that.
And then we connected, of course, over the topic of online learning and the panel organized by this company Inscribe, which I can put a link in, great people, cool product, not paid by them, but I'll put a link in our show notes. But they connected us over this idea of belonging, student belonging online, which is a huge topic.   And we'll get into that because you've done some research in this area. But first, we wanted to get to know you a little bit and just to chat about that. Tell us a little bit about your current roles and where you are living right now.   [00:02:17] Omid Fotuhi: Yeah, I think the best way to describe my current role is as a fish trying to climb a tree. If you've heard the expression that you shouldn't judge a fish by its ability to climb a tree, nonetheless, that's what I am. It's akin to what's also known as the Peter Principle, which is to say that if you're skilled and competent, you'll eventually be promoted into incompetence, often into management.   And that's not too far from the truth with where I am, except that I've been able to create a pretty unique situation for myself. I am a social psychologist by training. That's where a lot of my thinking and a lot of the way that I look at things comes from. And currently, I'm working for WGU Labs, which is the R&D arm of Western Governors University, and which focuses on how it is that we can create the technological tools and the research base to understand how to optimize learning for students, both in traditional but also online student populations. So that's what I'm doing right now. And the great thing is that throughout my position with WGU Labs, I've still been able to engage in conversations like this and invest in ongoing research on the topic of belonging, and our conversation with Inscribe is just an example of that.   [00:03:35] Jason Johnston: Yeah. And for those listening, you may or may not know WGU: huge university, interesting backstory, even in the news the last few years in terms of its funding from the government and the back and forth on that, which sparked a huge conversation about regular and substantive interaction.   And anyway, we could go in so many directions with this. One of the unique things I think about WGU is that it's competency-based. If I understand this, basically, every course that they put out is more competency-based. Talk to us a little bit about that. And like, how do you intersect with that kind of way to deliver online content?   [00:04:19] Omid Fotuhi: I mean, I think what I'll mention is the fact that WGU offers an alternative to the traditional design of education. And it's one in which WGU is able to challenge the prescriptive norms and standards of how it is that learning and assessment take place. And back in 1995, they said, hey, let's do this crazy thing of putting learning online and see what happens.   Fast forward to today, with over 150,000 currently enrolled and over 300,000 graduates, there is something to that recipe that seems to be successful, that resonates and offers a value proposition to individuals who may not have seen themselves as viable in the pathway of the traditional online or traditional higher educational opportunities that many other students would see themselves in. Now, when you look at some of the components of WGU, it is a competency-based, fully online, and self-paced learning model, which means that it challenges some of the common barriers to accessing higher education.
Those include things like a model of learning that challenges the standard assumptions of what learning ought to be, one of which is that this moderated learning, which is measured by seat time, the number of hours a student spends in the classroom, is the primary metric of how it is that learning should be captured.   And instead, it offers some freedom to some of those constraints. Specifically, it challenges the time-paced, place-based, and standardized testing approach to learning by having this online where you can learn at your own pace, it is competency-based, which importantly is able to capture learning in a way that's much more dynamic.   It allows the inclusion of experiences and learning that you may have acquired in other domains so that testing is a better reflection of the learning in itself as such, as I mentioned, with over 300, 000 graduates and over 150,000 currently enrolled, many of whom are seen as the non-traditional student populations it, it's a strong testament that this model, which is an alternative to the traditional higher educational model, seems to be resonating and working for many students.   [00:06:50] Jason Johnston: Could I ask one more question about WGU? Are you so far down the road now that like you're not even talking about Carnegie hours or about time in your seat or about those kinds of things or how it works there?   [00:07:02] Omid Fotuhi: What I'll say is it's important to unpack what we mean when we talk about students. For me, what comes to mind is a recognition that students are not a monolith group, that they are comprised of many diverse individuals with diverse characteristics diverse needs, and diverse preferences for learning. And if you take that insight and combine it with the understanding that we've all been exposed to recently, given the disruptions of the pandemic, given the advent of AI, given some of the increasing Awareness of the conditions of the more traditional higher ed institutions with their legacy admissions and other admission criteria that, that do selectively benefit some groups over others, but there is this, appetite in this atmosphere of exploring alternative models.   And so I think having schools like WGU that have an alternative model which appeals to a group of individuals who again, in the traditional view would not have seen themselves as being part of the educational process now becomes a reality. And I think As we're at this precipice of the, at this nexus of technology having a greater and greater role on how it is that we take, think about learning that more and more of these alternative models will have value for different subgroups of individuals.   So I think that's the way to think of it. And I also would maybe mention that being on the inside, WGU is also recognizing that it too needs to change and it too needs to adapt very quickly because the model that's worked for 25 years is not going to continue to work unless we want to fall, sort of categorize ourselves in the same way that the traditional higher ed institutions have had, which is to continue a legacy of traditions simply because that's what we started off with.   So that's what I'll say. I think it's an interesting time and I think what works today may not be relevant for tomorrow, but the ability and the willingness to adapt is really what's necessary given that there will be more and more inclusion of diverse groups. into the educational pathways.   [00:09:20] John Nash: That's really good. 
And it reminds me of the article that you co-authored last year entitled, "Where and With Whom Does a Brief Social Belonging Intervention Promote Progress in College?" This was published in Science, with over 8,000 downloads in less than a year. Maybe this struck a nerve with folks.   [00:09:53] Omid Fotuhi: I think everyone knows that education is important, and everyone's got a critical eye around what it is that can optimize the learning experience.   Now, maybe take it back 50 years ago: there was this observation that individuals who had high self-esteem also had correlations with better life outcomes, like better success, better academic performance, and better happiness in their relationships. And so there was this movement, the self-esteem movement, that actually encouraged people to now tell students and children that you're great, you're wonderful, and you can do anything.   Turns out that did not work out so well, because telling someone that you can do anything without the training and the work that has to go into being able to do that might fall short. A remedy to that was what then came on the scene, known as the growth mindset insight. This is a recognition that how you view intelligence has a pretty powerful role in how it is that you stay engaged with difficult things and how it is that you respond to failure and setbacks.   It too had its moment in the limelight, if you will. And unfortunately, it also suffered and struggled from what I would call an overuse or a sort of superficial application of mindset. Today, it's so ubiquitous in education that the most common rendering of a growth mindset lesson is a teacher saying, "Hey, we know from research that having a growth mindset is good, so you should have one."   Turns out it's not the best way, and the reason why is because now the onus is on the student to demonstrate that they have a growth mindset instead of the investment necessary to help them cultivate the appreciation for the effort. So that growth mindset insight is starting to see a bit of a stall in terms of its poignancy.   There's now also a recognition that a similar correlation exists between a sense of belonging and optimal outcomes. And so you can see history starting to repeat itself. What's happening now is that because this observation is powerful, and because it's compelling and relatively low cost, schools and educators are saying, hey, what if we just foster belonging?   What if that's what we do and that will solve all of our problems? So as you're probably seeing and hearing, I'm sharing this with a bit of a caution, because to do any of this work effectively, you have to be really committed to understanding the mechanisms that threaten a sense of growth mindset, a sense of genuine belonging, instead of unfortunately falling prey to the convenient articulation of the outcome, which is to say, belonging is good, so you should feel like you belong. Which, by the way, is not even an exaggeration: I have seen schools who have paid for full-size billboards with the word belong, exclamation mark, as you're driving into campus. So the Science article that was published, for which credit should be given to dozens of people, including the four primary researchers, Greg Walton, David Yeager, Mary Murphy, and Christine Logel, and myself, who co-founded the project, was an attempt to see how and where you can try to scale, which is really, again, at the heart of this tension.   When you see something that works, how do you scale it effectively?
And so that Science article, which really, I think, is aptly titled, "When, For Whom, Does Belonging Work?", the main insight and the main takeaway is not that belonging interventions will work in any and every situation, but that understanding the core requisites of when belonging fails, or what conditions threaten a sense of belonging, will then give you the pathway and the opportunity to try to explore what those triggers are that cause this belonging uncertainty, and then to target those things.   And maybe having a belonging intervention is part of your repertoire, but it shouldn't be seen as this magic bullet that will solve all of your problems. And that's the framework, I think, that's important as we're talking about these interventions.   [00:14:13] John Nash: I really appreciate your lead up to this, because what it helps us remember is that, as it's structured in the article, the social belonging is the intervention that then hopefully leads to the real outcome, which is students completing the first year, increasing the rate at which they complete, versus what you were just saying about belonging with an exclamation point, as though we're being ordered to belong, and then I can wash my hands of this and we're done.   [00:14:44] Omid Fotuhi: Yeah. And the other note, and again this might seem like a bit of a tangent and might even seem like I'm working against myself, because as I mentioned, I spent about 15 years of my life thinking and investing in understanding how these interventions work. Now, what I'll also say is I would suggest that the optimal endpoint for any psychological intervention is that it no longer works.   That might seem surprising, but if you think of it, any intervention, and the word intervention means to intervene, to stop something from harming, is actually a band-aid solution that is only intended to mend any underlying root causes temporarily. And so as you think about our interventions, like a growth mindset, like a belonging intervention, our hope is that they stop working, because the process of understanding when belonging becomes relevant and when it no longer works opens up the conversation to understanding what are the contingencies in the context that are causing belonging to be threatened and causing people to feel uncertain about their belonging.   So again, you shouldn't rely on these interventions as the solution. You should understand that these interventions will help begin an inquiry into the conditions by which these interventions are needed, with the hope that you get to a point where you don't need them anymore because you've solved the underlying causes.   [00:16:18] John Nash: You've signaled, in essence, how we can always be creating a belonging environment.   [00:16:26] Omid Fotuhi: Yeah, and what's interesting, and I think what you all are really focused on with your podcast and your work, is having a greater understanding of online learners. I think when you take this theoretical framework of belonging, first it's important to ask what is it? And that, depending on who you ask, you're going to get a different answer.   But overall, I think most people will agree that a sense of belonging is a feeling that you are cared for and valued in a particular context. Generally, I've found that this is the one definition that most people can resonate with.   [00:17:01] Jason Johnston: Could you say that one more time for us?
[00:17:03] Omid Fotuhi: Belonging is the perception or the feeling, and I will underscore and bold feeling, that you are cared for and valued in a particular context.   Now the reason why I underscore and bold feeling is that it is entirely subjective, which means that I can't give you a checklist of things for you as an administrator to do that will ensure that you will create a sense of belonging in all your students. It also, I think, at a broader level, highlights the fact that because it's such an individual experience, you also have to understand that context matters.   And much, if not most, of the theoretical foundations of belonging come from studying students in more traditional, on-campus universities. It's a really great theoretical question now of what is belonging for a student who's learning online? What are the touch points or the connections or the links that are associated with a sense of belonging?   And here's an even more ambitious question, and one that I think I'll leave you with: does belonging even need to happen for learning to be effective? And so that's a really, I think a first principles question for us to think about. Must there be belonging? And if you unpack that for a second, and this is my own little thesis on belonging, which is to say that we have created our society and our organizations to necessarily have these contingencies, these identity contingencies, which is a term that Claude Steele uses, where individuals have to navigate the norms, whether implicit or explicit, and feel as though they can either live up to those norms or whether they are excluded from being included in those norms. So if you look around and you're underrepresented, then you might start to wonder, maybe I'm not part of this group. If you look around and the way that you dress, the way that you carry your hair and your appearance is different, you start to question. If you are performing poorly compared to your peers, you start to question these things. And all of these are contingencies that make you question whether you do or don't belong. And I think a really interesting opportunity for us is could there be a model in which there is a learning environment in which there aren't as many of these contingencies, in which learning can happen independent of your sense that you are adequate, sufficient, worthy.   That's the next frontier. And I think that's what the incredible promise of online learning carries: that we could potentially envision a world in which we don't need to invest so much in trying to foster a sense of belonging, because a sense of belonging comes from your social network at home, your own sense of individual growth and progress, your own self-awareness, and you're able to invest in your learning in a way in which your identity is not contingent on how you do or whether you were included within the in-group or that culture that is the institution. That's where I would hope to see the future of learning happening, and that's where I think the promise of online learning is one step ahead of more traditional institutions.   [00:20:29] John Nash: I'm interested because I've been either guilty of oversimplifying belonging, or maybe I'm in support of your thesis, because people, and myself included, have talked about Maslow's notions of belongingness as a sort of love need, second only to physiological and safety and security needs.
When you ask, is belonging necessary for learning, are you thinking about it only as an aspect of the learning cycle for the learner? Or is, because if I feel belonging in general in other places, then have I satisfied that need?   [00:21:04] Omid Fotuhi: Yeah, that's a great question, and it's actually incredibly critical to understand that framework of needs and optimal functioning. Like any basic need, imagine if you're hungry, right? If you're really hungry, then you and I can't have this conversation, because you're focused on your hunger, you're distracted, you're depleted.   And that's exactly the way to think about these needs, is that it's only when they are absent or frustrated that their predictive effects emerge. It's more of the absence of these needs that becomes critical and important, as opposed to their presence. And I think all too often, we've been a part of a culture where we're like, we just need to, nourish this and have more of it.   And maybe that's a good thing. But that's a distinct question from its absence. It's if you go to a party, If you don't know anybody, you're not going to stay there. But if you know that one person, then that's all you need. You just need that one person that will introduce you and you feel like, alright, I have someone to talk to.   But if you have nobody, that's when it matters, and that's when you're not going to be able to focus, have conversations, even step into that party. And the same is true for belonging. The same is true for psychological safety. The same is true for physiological safety. that these needs only matter when they are threatened.   And again, I think this is where I go back to the conversation we're having is, why is it that we've created Institutions that, that question a sense of belonging, and rather than accepting those as, things that students have to learn how to navigate, maybe the question is, how do we redesign the institution in such a way that it doesn't threaten your sense of belonging?   How do we do that?   [00:22:56] John Nash: How might we have institutions that ensure there's no absence of these needs?   [00:23:02] Omid Fotuhi: Exactly. How do we, yeah, exactly.   [00:23:05] John Nash: Can I take your party analogy to an online learning class?   [00:23:08] Omid Fotuhi: Sure.   [00:23:11] John Nash: Yeah Yeah good. That's all I wanted to know. It was a yes or no question.   [00:23:16] Jason Johnston: I think you could start in something like, "Say you're the DJ of this party."   [00:23:21] Omid Fotuhi: There's being I love psychology for many reasons. I actually began in psychology for what a lot of graduate students or early researchers use as the reason for doing research, which is more me search than research, right? Psychology gave me a pathway to understand myself.   And through that, I was able to really better navigate how I'm feeling, how I'm thinking, how I'm behaving, and a better understanding of the world. There's a wealth of understanding and frameworks within psychology that help us understand a lot of complex issues. One of the foundational theories of human behavior and motivation is called self determination theory.   And essentially, that theory is, as far as I can tell, one of the best comprehensive models of why it is that we invest effort in a voluntary way. And there are three components of why it is that we would do this. 
One is that we have a sense of competence by doing something new and hard, but that the acquisition and the mastery of that skill helps to reinforce our self view as being capable and able to do something.   The second is that we have a sense of autonomy, independence, and choice in what we're doing and why we're doing it. If we strip that away, then all you have is conformity, and that's not conducive to optimal learning or optimal performance. And the third is relatedness. We are social beings at the end of the day, and it's hard to undo that hard wiring.   And so this is the one that I want to just maybe unpack for a second, because I think. It's one of those unspoken tensions, right? There's a prospect in which you can imagine online learning or maybe even AI driven learning where it's entirely independent and individual. You just imagine the world in which you don't need teachers, you don't need classes, you can just learn on your own.   And a lot of the critics will say does that mean that's the beginning of the end of society, that we just don't need each other anymore? I will posit that based on the foundations of psychology that's not likely to happen because at the end of the day we will also only invest in the acquisition of learning if it helps us better relate to other people.   That ultimately we're gaining this learning to exchange with others in a way that it's beneficial. Maybe I want to get your thoughts and you want to get my thoughts and collectively we create new thoughts together. Maybe it's part of a commercial agreement that I am employed because of the skill set that I have, but it's still related to this notion and this need of relatedness.   And one of the pushbacks that I have around this you know, fear mongering that if we just pursue online technology driven learning that we're going to get to a place where everybody's entirely independent and society will fall apart. I think there's some, boundaries to that notion. And honestly, I don't see that happening because of these fundamental needs that we have in the fact that we do care about these exchanges with other people really critically.   [00:26:22] Jason Johnston: One thing in addition to that as well, we've been doing asynchronous, independent learning since humans were around, really, and certainly since Gutenberg, right? This is all of us have learned asynchronously, independently from books. And I think this is always going to be part of what we do.   I think that there is a, maybe some layers to this as well, that we'll find ourselves in various domains of learning. Some of them will be more social, some of them will be less social. But I love what you're saying and I love what you bring to this as well. And I think I failed to mention, or we failed to mention before that you have a PhD in social psychology and I love what you bring to this, not just, I know a PhD is not the end all of your learning that you've learned a lot since I'm sure.   But it, I love that you bring that perspective to online learning that you're looking at it, not just from a education standpoint. Mine's more, my learning is much more from an educational end of things, but it falls in line with a lot of what we're learning as well about andragogy, these things fall in line, or even some of our more recent talks about liberatory practices inside of the classroom, thinking about the students.   Agency and what it is that will allow them to pursue their own their own learning and guide the knowledge that they're acquiring.   
[00:27:50] Omid Fotuhi: Yeah, these are the foundation blocks of understanding motivation and learning. And so I love that you're thinking about all of this. I will mention that given the current popularity of belonging, which I think is worth noting, that almost every institution will have some component of belonging or equity within their vision statement or their mandate.   We are at an interesting juncture where online institutions are also interested around how do we foster and create the conditions and the interventions that are able to create the sense of belonging. And so I've been pulled in, my team has been pulled into this question a lot, and we're starting to do the foundational research of doing exactly that, is to identify the triggers and the conditions by which belonging is put into question.   Because once you understand those levers, then you can start to create a program that targets those levers and having access to WGU and their student populations, we have had an incredibly accelerated rate of learning already but there's still a lot to learn, right? We mentioned earlier, what is the nature of belonging?   What is belonging really if the contingencies are removed or minimized like it is for WGU? In which case, what is the utility of belonging, if any? So these are the questions that we're wrestling with and gaining a lot of insights. And it's great to see that there are a lot of institutions who are coming to the table with these kinds of questions.   We've had partnerships with ASU, with SNHU, trying to tap into these same questions. But I imagine there's still a lot of organizations that are grappling with these same issues. And I love that you all are doing this work too.   [00:29:28] Jason Johnston: Yeah, and you mentioned that big billboard that was great, that said belonging, exclamation mark. It made me think of a research colleague at University of Kentucky Dr. Lanisha Connor, who I learned a lot from, and she said one time, "you can't declare a safe space. You just can't, just by saying the first day of your classroom, this is a safe space." It's and it also reminds me of the office, where, one day Michael Scott declares bankruptcy, and so he just steps out into the office and says, "I declare bankruptcy."   It's " Michael, that's not the way it works." And you can't declare a safe space. I like what you said about thinking about the conditions and interventions. Could you speak to each of those a little bit more, either what you learned from this study or from your own learning there at WGU?   [00:30:20] Omid Fotuhi: Yeah. And again, if you'll indulge me, I'll go on a little bit of a historical review. Much of this work is founded on some of those seminal and pioneering work of Claude Steele. Back in, I think, the 80s and even some into the 90s there was an observation known as underperformance. And specifically what that was is, There is an observation that as students begin a new phase of their learning, so they transition either from high school to college, for instance, or some transitional period, that given that those students from wherever they came from had almost identical credentials and grades, and yet they started in this new environment and consistently and predictably along the way.   Some of the underrepresented demographic variables that we know would now demonstrate poor performance compared to their peers. Now again, I want to emphasize that based on their past metrics, this shouldn't have happened. 
And yet there's something that's happening, as they transition to this environment, that is leading to this underperformance.   And this began the question of what are the forces? What are the factors that are leading to this underperformance? Because based on past performance, we wouldn't predict this. And so Claude Steele and some of his colleagues, Joshua Aronson in particular, designed an experiment in which they invited men and women into the lab.   And they told these participants that they would be doing a relatively hard math test, which was pulled out of the GRE, and the participants, both men and women, were randomly assigned to one of two conditions. In the one condition, they were just told, you're going to do a math test, it's pulled out of the GRE, so go ahead and do your best.   And again, even though these participants were selected because they had identical scores in college, and they had identical levels of interest in mathematics, when they were brought into the testing environment and told to do this test, women underperformed compared to men, replicating that underperformance effect.   Now, in the other condition, the participants were told, you're going to do this math test, but in addition they were told, although we know that in standardized testing sometimes women underperform compared to men, guess what? Our team has devised this test in such a way that this does not happen with this particular test.   And so given that same test, this time the difference between the genders did not appear. The men and women performed exactly the same way. So this was one of the first examples and demonstrations that there are forces, invisible forces within the context that individuals contend with, that lead to performance differences.   So these, again, are identity contingencies, the conditions in a situation that one must navigate based on one's social identity, and they can have a pretty powerful effect. In particular, one of those identity contingencies is known as stereotype threat. That is, if you perceive that there is a negative stereotype about you or your group that you become worried or at risk of confirming, then that places an additional level of tax and cognitive burden that you have to contend with in addition to the task at hand.   While you're sitting down and doing this test, you might start to hear your internal thoughts going, hey, this is a test, you're here with your peers, it's a math test, and maybe women are not supposed to do as well. And as you're trying to stay focused on the test, you hear this internal chatter, and maybe you might even retort and say, hey, stop thinking about this.   It doesn't matter what these notions are. It doesn't matter that John next to you might judge you negatively. You're still engaging in that chatter. And your emotional system is also activated. You're also anxious. And you're also more vigilant to see if people are going to look at you if you do more poorly.   All these things are robbing you of the cognitive resources necessary to do the task ahead. And of course, that becomes the mechanism by which you underperform, and not a reflection of the fact that you are not as skilled or as prepared to do the test. These are what are known as identity contingencies, or the conditions in an environment that predictably impact certain groups in predictable ways.   And so that's important to note, that there are these conditional factors that systematically impact individuals in different ways.
And I forgot the second part of your question. You said there was a conditions and the intervention, is   [00:35:07] Jason Johnston: And then the interventions, yeah.   [00:35:09] Omid Fotuhi: In the example by Claude Steele and Joshua Aronson, they also learned that if they reframe the meaning of the situation, that they lighten the weight of the identity contingencies, the conditions of the environment, then that can free you up.   That can free up the cognitive resources that otherwise would have been available to you. So if you calm the anxiety, if you calm the worry in your mind, then you can perform better. Now I've been doing this work in psychology with a specialization on mindsets, motivation, and performance for over a decade.   And over the years I actually get a question a lot that, that where people and usually students will say, "Hey, knowing what you know and understanding the research like you do, what is the optimal psychological state of learning? Is it one of an intense focus? Is it one of being in flow? One of deep curiosity?"   And I'll respond that based on my understanding and reading of the literature is that the optimal psychological state of learning is actually one of simply being okay. One of just having your cognitive resources and your thinking be calm so that you can engage in processed learning in an optimal way.   So you can be critical about the information that does make sense or it doesn't make sense. You're not bogged down by all of this chatter in your mind about what other people might be thinking. That is the optimal condition of learning. And so as we think about the conditions that tax your cognitive load, that's where we focus on.   Now, as it relates to interventions, The process of identifying when these contingencies have a negative effect is also a pretty robust process. So my colleagues and I realized that although the theory is sound, one theoretical framework may not be relevant for all groups in all conditions, which is to say that any intervention won't necessarily be effective for any group in any condition. And so there's a lot of customization and tailoring that has to happen. Like my colleagues, Jeff Cohen and Julio Garcia, who's passed away unfortunately did did articulate a framework called the three T's framework that an intervention needs to be tailored timed and and timely.   Which is to say that you have to understand who it is that you're serving. when, and for what underlying cause. And that's why these interventions are relevant. You're probably hearing me talk about interventions in a very tentative way, a very careful way, much like you would expect an academic to speak about things, but it is important because while I could stand here and say, "Hey, belonging interventions have been shown to be effective, just scale them. Growth mindset works. Tell everyone to have a growth mindset."   That's not the lens or the position I'm coming from. But what I, where I am coming from is if you are able to identify those contingencies within the environments that put into question your adaptive mindsets, then that becomes the foundation of exploring how that manifests for different groups in different environments, which can then lead to the design of an intervention. And it might be one of these psychological interventions, it might be a structural intervention, it might be a financial intervention, based on where the evidence leads you.   [00:38:35] John Nash: It's almost as if you're
EP 29 - Dr. Ericka Hollis - Teaching in the Digital Age: Cultivating Belonging and Excellence Online
Jul 29 2024
In this episode, John and Jason talk to Ericka Hollis, PhD, about silence as liberatory practice, student backchannels, belonging in the online classroom, and leadership challenges with professional development. See complete notes and transcripts at www.onlinelearningpodcast.com Join Our LinkedIn Group - Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too) Links and Resources: Great list of foundational articles on the Community of Inquiry; ACUE's Effective Teaching Framework for Higher Education; John's paper on online discussions: "A Tale of Two Forums: One Professor's Path to Improve Learning through a Common Online Teaching Tool"; Dr. Ericka Hollis Contact Information - ACUE Page; Email: ehollis@acue.org; LinkedIn; Twitter / X Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License. Transcript We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections! Mic Check [00:00:00] Jason Johnston: Hey, John, could I ask you, will you tilt your mic back a little bit? I'm sorry to be so mic-picky these days. [00:00:09] John Nash: Should I talk while I do that? Here's where it was and now I'm still talking and here's where it's going and now it's here. [00:00:17] Jason Johnston: Yeah, that's pretty good. [00:00:19] John Nash: I do appreciate your pickiness. I do. Silence as Liberatory Practice [00:00:21] Jason Johnston: All right. As you can see, this is a pretty, pretty tight operation we run here. The Online Learning Podcast. Heh. Basically, when we started it, we decided that we would just do what we could do. You know what I mean? And we're having a good time. And I think that, I, we're getting some good responses from it. I think people that listen and we produce it up to the level that we can manage. And yeah. And this is it. [00:00:50] John Nash: I especially like the silences. It's a solace, not soul less. It's a SOLACE. [00:00:57] Jason Johnston: Solace. The silences. Yeah. [00:01:00] John Nash: Yes. [00:01:00] Ericka Hollis: One of the effective teaching practices is wait time. Most of the time in education, we don't wait long enough for someone to actually think and respond, right? There's research behind that when you jump right in. And so I love awkward silence. I'm really an introvert. Although most of my career, I do things that are very extroverted. So I'm okay with the pause and the solace, if you will, John. Yeah, [00:01:30] John Nash: we'll just do the Ericka Hollis episode and we'll just have it be 40 minutes of no talking. [00:01:36] Jason Johnston: Yeah. Like John Cage, if you're familiar with his pieces. He sits at the piano and he's got sheet music and it's all blank. After four minutes and thirty-three seconds, he packs up the sheet music and then goes. But I feel you on that. I'm an introvert as well. And I'm also, I feel like I'm slower, sometimes slower to respond, especially in a classroom where I'm taking in a lot of stimulus. And so I always found in the face to face classrooms, I would think of really like good things to say, like later, two hours later, or good questions to ask, but it was rarely like right in the moment. It was like, it was always later, which is one of the things I liked about online learning: the asynchronous gave some simmer time for me and some time to think about things and to be able to respond some.
[00:02:29] Ericka Hollis: I think that's a fair point. That's one of the reasons I have one of my youngest sister is she has extreme social anxiety, and she has just done so much better in asynchronous online courses, even as an undergraduate student. Just because that works better for her, instead of being like called on in the class, like cold calling, we cold call on people. And some people are like, yeah, they jump right in. And some people you can see like terror in their face when you call on them. And so I think it's a very good point in thinking about who's in your classroom and what actually works for them. And are you giving everyone like the same level playing field where I feel like in a face to face class, even in a synchronous Zoom class, it favors an extrovert, right? One that wants to put their hand up. It doesn't really favor those who are still thinking, still processing, in that kind of way. So that's one of the things I do enjoy about it the most from a like, pedagogical, andragogical standpoint, like the process time, the wait time. [00:03:36] Jason Johnston: So like silence as a liberatory practice. [00:03:42] John Nash: Oh, I like that. [00:03:44] Jason Johnston: I think that makes a lot of sense, and even the way that Zoom is made, those , that feel comfortable being seen, and they have their video on, are going to pop to the top, right? [00:03:57] Ericka Hollis: Yeah, [00:03:57] Jason Johnston: So those that don't say as much, and don't feel comfortable having the video on, they're going to be at the bottom, or even on the second page, if you have a very large class, or [00:04:07] John Nash: Or the third page or the fourth page, I've noticed that. Yeah. You have to go way in to find all the students. [00:04:14] Ericka Hollis: exactly. [00:04:15] Jason Johnston: That's good. So, we've started already. Thank you. That's a good conversation. Intro [00:04:22] John Nash: I'm John Nash here with Jason Johnston. [00:04:26] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the Online Learning Podcast. [00:04:31] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last couple of years about online education. Look, online learning's had its chance to be great, and some of it is, but a lot of it still isn't. How are we going to get there, Jason? [00:04:45] Jason Johnston: That is a great question. How about we do a podcast and talk about it? [00:04:50] John Nash: That's a great idea. What do you want to talk about today? Start [00:04:53] Jason Johnston: In addition to that, how about we do a podcast and invite really cool, wonderful people from our past to talk to as well. Wouldn't that be cool [00:05:02] John Nash: That would be cool. Let's get some good old friends on here and have a good yarn about. "What is up in online?" [00:05:09] Jason Johnston: That sounds good. Today we have with us Dr. Erica Hollis, a good friend of ours from way back at the University of Kentucky. I can say that you're still there, John, but the rest of us have moved on, no, I'm just joking. Erica, welcome. [00:05:25] Ericka Hollis: Thank you so much. I'm enjoying this already. [00:05:28] John Nash: it's so wonderful to have you here. It feels like old home week. [00:05:32] Ericka Hollis: It does. It feels very, I feel very comfortable, and I can't wait to have this conversation with you both. I haven't seen either of you in probably a decade. So, I'm really happy to catch up. 
[00:05:46] Jason Johnston: Yeah, all of a sudden, we start talking in decades. This is what happened. Now you're younger than both of us, Erica, but this is what happens as you start to, get up there. You start talking and measure your years and in decades. Online PhD Backchannels and Support [00:05:57] John Nash: Yeah, so, Erica, it's wonderful to have you here and we do have a bit of a backstory. We first met when you were a doctoral student at the University of Kentucky. Was that 2012, 2013? [00:06:12] Ericka Hollis: That was 2012, my friend. [00:06:15] John Nash: Yeah. And I, among all the things I remember from your time in the program I I recall that because we were the first online PhD at the University of Kentucky, we hoped that the students would start a back channel and you all were inside of Google chat. I think subsequent cohorts have chosen everything from Voxer to Signal. And But you and Todd Hurst, I think, wrote a paper, did an analysis of all the chat that went on in the back channel and what makes community in an online, and I thought, we're onto something now here. I think that was, but I remember that from your time in the program, and now you've gone on to apply that in so many new ways. It's cool. I can't wait to talk about that, but that, that sticks out. [00:06:59] Ericka Hollis: I definitely remember that. Our backchannel came, you should both know this, came out of necessity. We were in a synchronous class and one of our professors, who I will not name, was talking and someone started the backchannel and said, what is he talking about? Does anyone know what he's talking about? And people started laughing on screen, right? And then everyone started chiming in the professor is talking about this is what we're doing. And the back channel stayed, it's still intact. Like years later, we've graduated, we still use that back channel. I'm not kidding. Like when someone gets promoted or someone has a question or you want someone to look at something, we still use that back channel. And it was Google Hangouts now I think it's called Google Meet or whatever Google has changed to. But yeah, it the back channel was amazing. Um, I have four life colleagues I believe. And I would say the community that we built is, it was just so special. Like I haven't seen. anything like that. And I've tried to figure out how to recreate that in other avenues. And sometimes it goes well and sometimes it doesn't. But giving people the opportunity to figure out how they want to connect and not tell them how to do it, I think is the most important thing, but suggesting that they do. [00:08:23] John Nash: Yeah, that's carried on. And so in the program, we've done just that. We said, we don't care what you create here or how you create it. Just make one and pick a platform. And then, yeah, it's stuck. It's become a necessity. I think. Yeah. [00:08:38] Ericka Hollis: Yeah, I would say what's also interesting, too, is we had our own back channel, and then when the next cohort came in, we started another back channel and included them, but we still kept our separate one. So that one's still intact, and the idea was that it would keep building and building upon each other. [00:08:59] Jason Johnston: Yeah, and I was in a subsequent cohort and we used the back channel approach mostly because of You're going ahead of us and I would say as a PhD student particularly in those early years of building in that coursework and trying to figure out what you're doing. 
It was really important and nothing against the program. But I think that the program got more organized as things went along and, even I think as I was leaving, John, you guys were pulling together like materials that were very clearly we want everybody to know, [00:09:35] John Nash: Yeah, we have three metrics when we know things are going right in the program is that students say that the faculty have their back, that they are not alone, and they know what to do next. [00:09:47] Jason Johnston: yes. And I think there are ways in which the first two, because. The faculty were great, very personable, and very approachable. I think none of us had question about the first two. I think a lot of times we didn't know what was going on with the third one. Even if it was really clear to faculty and the teachers, we weren't really sure what to do next. And I think that was one of the great strengths of having that back channel as well as just that support. We were all working adults trying to make this happen, and it was crazy, really, to try to think about working full time and getting this stuff done. And it was the support of that back channel was really helpful. So, Erica, you and I, we met at University of Kentucky as well for me and a person that really helped ease my concerns about going to the program as well as just on the front end really helped me know how to guide myself into it because, and I think about this when anybody is going into a new, level of education, right? I didn't have anybody in my family that had ever gotten a PhD before and I was fairly well educated even at that point and been around higher ed, but I still didn't really understand the inside word a little bit. And that's where it was so important to talk to you because like I was looking at this coursework and I was trying to figure out if I could really learn what I wanted to with all this. And you were like, don't worry about that. Yeah. Just find somebody to connect with that can be your chair and just tell them what you want and it's going to be fine. You'll figure it out kind of thing. And it felt like on the front end that maybe that wasn't possible just by looking at the web pages. And then you really were a huge mentor for me and encourager. So thank you for that. First of all. You're probably one of the reasons and meeting John and some of the other faculty, of course, but you're one of the reasons why I actually took the plunge to do my PhD And then the other thing was already your work in online learning. I learned so much from you at University of Kentucky. You're already doing boot camps with people. You were the first that I found at University of Kentucky that was doing more of a standardized kind of templating with people and trying to help people with canvas, try to think about quality matters approach to online learning. And yeah, you're just a super super helpful for me in those back in the day, back in those University of Kentucky days. [00:12:25] Ericka Hollis: Thank you so much, Jason. That, that's a lot done back there, but I really appreciate it. And I love mentoring you. Anyone that is thinking about this program, I'll talk to them and tell them the truth. And the truth was, Jason, that, You are a doctoral student, but this is your program. You need to get out of it what you need, and the faculty are there to help you figure that out, but if you have a somewhat of an idea of what you wanted to study, so some of , our colleagues in our cohorts were K 12 focused. 
And some of us were higher ed focus. So think about who do you need in your circle and thinking about what you want to do in terms of if you want to study online education and higher ed for me, I wanted to look at online higher education leadership. So then who do I go to for that? Who can help me with that? And the faculty's job is to guide you along, but you're so right that you do need that support. Because we all struggle with imposter syndrome, imposter phenomenon. And so, am I really on the right track? Am I doing this the right way? And, like all of those things that happen. When you're in a learning space, it really doesn't matter if you're getting a PhD or working on an undergraduate degree. Everyone goes through, those challenges. And so, I'm so glad that I had that conversation with you and that you reached out to talk to me. And I'm so glad, even happier, that you decided to do the program. And I think you were a valuable asset to the University of Kentucky. I think the work that you were doing there was so vastly important for the institution. And so I'm just grateful for your work and You as a colleague, because I've been able to send people, after I transition, I was able to send people to you that still were, you know, asking me questions that I could send them over to Jason and I know that you would take care of them and that they would be in good hands, particularly faculty who sometimes don't necessarily want to ask for help. There's a delicate balance there. For those of us who do faculty development, right? Because all faculty wanna put their hand up and say, I don't really know what I'm doing. But if they come to you, they you wanna make sure that you are approaching them and what they're trying, figure out what they're trying to do and so that you can help them get there. Most of the times that requires a very good active listening. Which I would say is one of the most important things any of us can really focus on is like listening to what the person is saying. And so in that conversation with you, that's what I was trying to do. What are your real concerns here? It's a great program. I'm in it. It's a great program. I wouldn't be in it if it wasn't great. So what are you really concerned about here? [00:15:10] Jason Johnston: That's good. And it's, it's interesting to think about those kind of moments. It doesn't feel like it was that long ago, of course now, but I just, so picture, where your office is and talking with you and having that, that face to face conversation and yeah, so, so pivotal. And it's been a good reminder to me, as people come along. Just to be Just to be open in whatever I tend to talk to more about because I'm not in a college. I'm more in a centralized academic unit. I find myself, talking with a lot of instructional designers and people talking about, their futures and people connect with me on LinkedIn. And I try to always be available for people as much as I can, just, for that very thing, just to try to tell them the truth and I appreciate you modeling that. Ericka Background [00:15:53] Jason Johnston: So what so where have you been since the University of Kentucky and talk about that a little bit. We'd like to both maybe catch us up a little bit if there's anything we don't know, but a little bit for our listeners so they understand who you are and then get to what you're doing currently. [00:16:10] Ericka Hollis: Sure. So, I left the University of Kentucky not really wanting to move, so everyone should know that. 
I really had a hard time leaving, but my spouse had a wonderful opportunity and we moved to Massachusetts, and lo and behold, I landed a job at Harvard. And in the Graduate School of Education, aka HUGSES, like how we refer to it. So I landed a job there as the Assistant Director in the Teaching and Learning Lab. And I'm thinking, lab, this is going to be exciting, I was thinking like maybe it's going to be like the dLab at the University of Kentucky. And what I really found out is that my role and what I was doing was basically, I had a team of instructional designers, video people, and all of those types of people, and they were wonderful and so very good at what they did, but what we produced quality wise, it was really glossy, it looked great, but it was for the entity that actually made money. So a profit making part of the organization, and it really wasn't competency based. So, let's just say I'm going through these online learning modules, they're really well done, to your point earlier, Jason, they meet the quality matters standard, the courses look great, but have they really learned anything? Did I really move the needle in terms of their actual thinking and what they need to do to be better superintendents, to be better principals, to be better educational professionals? And I couldn't say yes that my team was doing that, but that's also not what we were tasked to do. We were tasked to create these things and put them out there basically so we could make money, they were branded with the Harvard brand. And it's not, I'm not knocking them. This is just once I entered into the job, that's what it was. And there was a disconnect between what I thought the job was and what was really happening. And so that didn't really align with who I am and why I decided to get a PhD, right? I care about teaching and learning and moving the needle. Like, I care about that vastly. And so it really didn't align with what I was doing. And so I walked away and kid you not, I walked away with no job. I just left and people were like, "Are you crazy? You left your job at Harvard?" I was like, it was driving me crazy. So I don't want to be in a work environment where it doesn't really align with who I am and what I care about, but also am I putting out high quality learning for people? And so I walked away, no job. I was out of work for a while, like almost eight months, which is very uncomfortable. If you have been a person like me, who's been working since they were 15 and sometimes multiple jobs. So I didn't have, a job. I was doing some things on the side, but not really a day to day. I came across a job ad at Regis College looking for someone to help with faculty development, but in the job description, it said that you needed a nursing degree. And I was like, they really need my, they're, they desperately need my help. They don't even realize that they need someone like me. You don't need a degree in the subject matter to be able to do these things that are on them. So I wrote a very convincing letter, had a great interview process. I think the job started off as a, like a director. By the time I actually started, it was like an associate dean. I had met with so many people over and over. And so the job grew, my responsibilities grew the more and more that they were talking about me. And I had a wonderful experience at Regis. Regis is a private Catholic college located in Western Mass. Had a wonderful time. Working with the faculty there. They're so special. They love their students. 
If you think about where Regis is in higher education, particularly geographically, thinking about Massachusetts, right? So Regis is in Massachusetts. Massachusetts is like the higher education Mecca. You throw a stone, there's a university, right? So the students that were going to Regis are not students that were going to Harvard, MIT, BC, BU, right? They're very special students that want to be at a smaller institution that's faith based. And so I had a wonderful time working with the faculty there, being really mindful of who their student body was and how do we help them achieve their goals, both online and in the classroom and hybrid. So we had over 1,000 online students, which was crazy for a school our size that had a little over 3,000 students total. So about a third of our students were online. And online grew so quickly that a lot needed to happen in terms of leadership, in terms of standardization, in terms of faculty development, and actually getting faculty to teach those classes. So it was a very exciting time at Regis. And while I was there, I was promoted to assistant provost. And so I was in charge of the Center for Instructional Innovation, and also I was responsible for all faculty development across modalities, regardless of whether you were an adjunct or a tenured professor. So those were my responsibilities at that institution. While I was at Regis, I discovered ACUE. So ACUE is the Association of College and University Educators. And while I was there, I was able to use some Title III funding to do some professional development for some faculty. And when I looked at the product and I saw what we actually had, what their offerings were, I was blown away, because I know solid design, right? I've been doing online learning for years. I've been creating courses. I've had teams to create them. And I was really impressed with what they were doing. So I was able to launch a small group of faculty to do a professional development course around fostering a culture of belonging. And it was a beautiful experience for those faculty. From there, we had a faculty learning community that kind of spun off. And so it was very well done. And the magic sauce in ACUE is, unlike any other professional development that you do, they make you implement the practices while you're doing them. You implement while you're learning. So over an 8- or 10-week time, you implement 8 or 10 new practices in real time. You don't walk off and then come back and say, Oh, yeah, I decided to do that. And so you see if it's working or not. And of course, all the practices are evidence based. And so I learned all about ACUE and I was like, I drank the ACUE Kool-Aid, right? I was into it and I was like, this is great. So we did a micro-credential on fostering a culture of belonging, but then we also did a comprehensive credential called Effective Teaching Practices. And so I was able to co-facilitate that program with one of my instructional designers who was new. So it was a great experience for her to get to know the faculty and also for the faculty to become familiar with ACUE. So we did that and it was pretty good and I loved it so much and I was so good at it. I had a hundred percent completion rate. Can you believe that? For faculty, for something that lasted 10 weeks? They couldn't believe it. And so they invited me to go to conferences and speak with them and tell them, like, how did I get the faculty to be able to do this and all of that? 
And so when they had a job opening for the senior director of academics, I applied. And I started that job on December 1st, and I've been doing that since. And so I lead a team of seven academic directors across the nation that essentially make sure that the faculty course takers implement those practices that are in the courses. I felt like I talked a lot. So let me know what follow-up questions you all might have about that. But that's like my story. [00:24:39] Jason Johnston: Yeah, that's great. So just to put it in context, you've been at ACUE for just a little over two months now at the recording of this. And what do you do in this role now? Sounds like your first connection with them was utilizing some of their professional development at Regis, and now what do you currently do in your role? [00:25:01] Ericka Hollis: Yeah, that's a great question. So what I do now is I lead the team of academic directors, and that team is an esteemed group of higher education professionals, all of whom have been either teaching or in higher education for at least 15 years. So our team is responsible for the actual implementation of those practices that are in each one of our courses. We have multiple courses and certifications and all of those things. So our group ensures that the course takers have the best possible experience that they can have, but also that they're implementing those practices. So you can think of us as the implementation team, almost. So, like in online learning, there's like an arc. There are the people that build the courses. There are the people that recruit someone to take the courses. We're the team that ensures that the courses are taken, that we have the right people in the course, that the people are getting what they need in the course. And then someone does evaluation. So my team, we implement, and we implement well. So we have metrics around that. So I'm responsible for those metrics and all the individuals on my team. [00:26:17] Jason Johnston: So you're almost like a separate teaching and learning institution in some ways. I don't want to say this to say that it's competition with others that have teaching and learning centers, because you probably collaborate with a lot of teaching and learning departments within universities to provide training, is my guess. But you really operate as this kind of almost third-party entity. Does that sound about right? The delivery end of the professional development? [00:26:45] Ericka Hollis: You're spot on, Jason, you're spot on. So we meet a lot with CETL directors, centers for teaching and learning. And what we do with them is think about how our programming runs parallel to, or how it can work with, what you're already doing. So, if your faculty are going through this national certification, and by the way, our certification has been endorsed by ACE, the American Council on Education, it's been vetted, right? And we have a lot of research around it that we do annually. So if you think about it from a teaching and learning center, if you use this as a base or part of what you're already doing, part of your framework that you're doing in your center, we can then partner with you on those things. So some of the things that spin off are maybe a faculty learning community, or, out of the five modules, maybe you do lunch and learns around those five topics, or other things like that. So I don't see us as necessarily in competition with them. 
I see this as like a foundation for those teaching and learning centers, especially those that are small. So I don't know about your listeners, but some of us are in teaching and learning centers that are like two people. Or it's one person, or it's one staff member and a faculty member that gets a course release. It's not a comprehensive center where you have tons and tons of staff. So think about what you can actually build, what you can actually produce that's of quality, that's research based. So that's how I see it: us partnering with them and using us as part of that, not as a competition. But we do get that question a lot, like, if we use ACUE, then what do you need me for? That type of thing. But that's not how I view it. Belonging and Humanizing Online Learning [00:28:32] John Nash: Ericka, you were talking a minute ago about belonging, and that's almost become a synonym, for Jason and me and a lot of the work we've been doing the last year and some people we've been hanging out with, for this notion of humanizing online learning. It's hard now to talk about humanizing online learning without the word belonging coming up. And so I'm wondering where your head is at on this with regard to how you think we can make online classes feel more like a place where students feel belonging and less of an information dump. [00:29:02] Ericka Hollis: That's a beautiful question, John. Thank you for asking that question. I think there are a couple of things that I want to just put out here. The first one is, good teaching is good teaching regardless of the modality, right? So if you think about the things that we do in a face-to-face class that we consider good teaching and making our students belong, how then do you do the same thing in an online environment? Or how can you do that in an online environment? Or can something in the online environment connect you to your student, making them feel even more like they belong in a situation? I think that's the first thing we should think about, okay? And then the second is, do you really know who your students are? How can you create an environment where people feel like they belong if you don't know them, if you're only thinking of them as an avatar on a screen or an icon? There's a real person behind that name, right? If I'm a faculty member and I'm teaching the course, and you say really mean things to me in a discussion board, I feel that, because I'm a real human being behind that. So I would think about how can we as educators really get a sense of who's in the course. Do you really know the learners in the course? And how do you do that? There are tons of ways to be able to do that. You can have questionnaires, you can have a synchronous meeting. You can do a like, wish, wonder. John, I've been using like, wish, wonder for years in terms of feedback; it's something that I learned at the University of Kentucky. [00:30:48] Jason Johnston: I wondered if you would say one sentence about the like, wish, wonder. It feels like something I should know, and it sounds intuitive, but even a sentence just to explain what that is. [00:30:59] Ericka Hollis: So, what do you like about what's happening? What do you wish was maybe a bit different? And then sometimes we would do wonder: what are you still wondering about? And so I use this all the time. I use it in terms of giving feedback to people that I lead. 
I use it even to give feedback to my partner, on wonderings, like: you said that, but I still have wonderings about this thing. So it's become a part of my vernacular and just how I function in terms of giving people feedback. But I also think it's a wonderful tool for learners when we're trying to get them to give others feedback, actual critical feedback on something like a critique, instead of just saying, Oh yeah, I really like what you did there. Like, why did you like it? So give me two likes. And what do you wish maybe was different? So that wish requires them to give some kind of substantial feedback, not just I like it, and I thought it was great. And then if you push the envelope a little bit more, wondering: so what questions do you still have? John Nash: I learned that technique at Stanford from Bernie Roth, Doug Wilde and the late Rolf Faste, who were old guard in the mechanical engineering design division, and that's the standard feedback mechanism for getting feedback in a design thinking cycle. [00:32:14] John Nash: And you're right, Ericka. The lovely thing about the wish is that it allows you to provide a criticism, but it's always phrased as a wish that I have for the situation, not something I don't like about what you did. And the backside of this, that those guys taught, was that the only answer you're allowed to give to the like, wish, and wonder is "thank you." But "thank you" is a coded term, which means I caught it, I got it, and if I decide to do something with your feedback later, I will. So now all the agency on taking the feedback and doing something with it is still on me as the receiver of the feedback. I don't have to do what you say. And the person giving the feedback's been taught to phrase it in a way that it's usable, but also in a way that they own it, not put it upon the recipient. So, yeah, it's lovely all around. [00:33:08] Jason Johnston: That's great. That's a great tool. [00:33:10] Ericka Hollis: I love it. And I'm reminded of a quote that I remember from that time, too: feedback is a gift. You don't return it. Thank you for pointing that part out, John, about how you get to decide what to do with the feedback once you get it. [00:33:26] John Nash: That's very empowering. And then you don't feel bad about what you're hearing. It's always still with you to decide. You have all the agency. [00:33:35] Ericka Hollis: There's a ton of different ways that you can connect with your students to make sure that they feel like, I see you and you're a part of this learning environment, because that's what we're trying to create when we get people to feel like they belong, this learning community. We want them to participate as learners. So there's that piece. So good teaching is good teaching. Who are you? And then, once you're in the course, how are we providing opportunities for the learners to relate to each other? Like, learning is social, right? And so if I'm in an online learning environment and all I do is create papers and I don't do discussion boards, I just write, and there's only an interaction between me and the instructor, that's not really making me feel like I belong. So if we think about the community of inquiry framework, this is a great example of one way to show belonging. How do I use all of those circles to really produce this environment where the students feel like they belong? Are they getting an opportunity to, like, have their voice heard? 
Are we giving them choice on what their options are? So, let's just say it's a final project. Is it just a paper? Could I do a paper or a podcast or a presentation or something else? Giving them a variety of choice also makes them feel like they belong and like their voices are heard. So, these are the things that I think about when I think of how we make sure our learning environments make the learners feel like they belong. There are a number of things that we can do, and I would say some of them are small, right? So it's not like you have to go off and change everything that you're doing. I think about James Lang, Small Teaching, right? I think about Flower Darby, Small Teaching Online. There are certain things that we can do to help produce this, but we have to be intentional with creating that space and not just think that it'll magically happen. It makes me think of you, John, and that article that you wrote many years ago about the discussion boards, like, build it and they will come. That's not how a discussion board works, right? If you don't have a really good effective prompt, people aren't just going to chime in, right? You have to create a prompt. What are they responding to that gives them the opportunity? So, those are just some of my thoughts on that. What do you all think? [00:36:10] John Nash: I think you bring up a good point. Two things come to mind. First of all, go back to the discussion boards. You're right. If you build it, they will not come. So many of them were built that way for decades. And I wrote that article in 2011, it got published in 2012, and discussion boards were happening inside Moodle or other places, and it was basically like, okay, y'all post once, reply twice, and then that was the grade. But what were we talking about, and why were we talking about it? And then the other thing is what you said at the top of this: I think it's interesting that the push to bring good teaching online has really rekindled conversation around what is good instructional design, and you're right back to your teaching point, right? Good instructional design is just good instructional design, no matter what. I want to hear Jason's thoughts, and I want to come back to the discussion boards when we talk about AI, but yeah, what are you thinking? Community of Inquiry [00:37:16] Jason Johnston: Ericka, you talked about the community of inquiry model, which I think is a really strong way to think about online learning. Garrison, a fellow Canadian of course, is the one who brought this to mind for me. And for those listeners that maybe haven't really heard a lot about this, it's the one that shows three overlapping circles of different presences. So, depending on who you're reading, it talks about the teacher presence, the student presence, and the content presence in those circles
EP 28 - Spring 24 Check-in focusing on AI in Education: Navigating Ethics, Innovation, Academic Honesty, and the Human Presence online.
Jun 17 2024
EP 28 - Spring 24 Check-in focusing on AI in Education: Navigating Ethics, Innovation, Academic Honesty, and the Human Presence online.
In this Spring 2024 check-in, John and Jason talk about AI-created voices, the importance of human presence in online education, the challenges of AI detection like Turnitin, and insights from their spring conferences and presentations. See complete notes and transcripts at www.onlinelearningpodcast.com Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)* Links and Resources: Eleven labs AI voice generation (on OpenAI)John's deck from his presentation at ASBMB - AI as an instructional designer and a tutor.The Ezra Klein Show - Interviewing Dario Amodei Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License. Transcript We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections! False Start: John Nash: Okay, we'll get AI to fix that. Jason Johnston: You can maybe get AI to fix that. Intro: AI Speaker 1: Hi, I’m not John Nash and I’m not here with Jason Johnston. AI Speaker 2: Hey, not-John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast. AI Speaker 1: Yeah, and we are doing this podcast to let you all in on a conversation we’ve been having about online education for the last few years. Look, online learning has had its chance to be great and some of it is, but some of it isn’t. What are we going to do to get to the next stage, not-Jason? AI Speaker 2: That’s a great question. How about we do a podcast and talk about it? AI Speaker 1: That sounds great. What do you want to talk about today? AI Speaker 2: I’ve got a big question for you not-John. Are you ready? AI Speaker 1: Okay, shoot. AI Speaker 2: If we carefully and lovingly create a script for an online learning video (or podcast) but then have AI-created voices read that script. Are we humanizing or de-humanizing online learning? AI Speaker 1: I’m just a text-based large language model chat-bot and I don’t think I’m equipped to answer that question. Maybe we should bring in the real John and Jason? John? Jason? What do you think? John Nash: I think it's a great question, real Jason. Jason Johnston: Yeah, real John. It's it's good to see you in real Zoom. and that is a great question that this our chatbots pose for us today. And I think that yeah, I'm not, what do you have any initial responses to the question if we use AI tools to lovingly create our scripts for online videos or for podcasts, are we dehumanizing or are we, humanizing these experiences John Nash: Well, it's a classic academic answer, isn't it? It depends. Jason Johnston: Depends. John Nash: But I think used exclusively, I think it does dehumanize. I think used judiciously and with an agenda to humanize, I think they could be helpful, but the jury's probably out because it's all context, isn't it? Jason Johnston: Yeah, definitely context and it gets into some philosophical questions as well, when we talk about humanizing. There is the act, there is the perception, right? And so, this goes back to some of the things that are going on even with AI telehealth, and so on. Or AI therapy. If the people don't know, does it matter? Does it feel human? Have they had the experience of being with a human, even though it wasn't a human? And then does it matter? I guess there's a ethical question about, It matters because we want to be transparent and we want to be honest with people and so on. 
But if at the end of the day they feel like they've been in a humanized situation and it gives them maybe a positive outcome for them. John Nash: Yes. Yes. Yes. I think we discussed that last year a little bit. So essentially what we're saying is that if we fake them into feeling belonging, then that's okay. Jason Johnston: Yeah. As long as maybe we're not being dishonest with them. Or, I shouldn't say maybe. As long as we're not being dishonest with them. I think that would be the cutoff for me. If people knew what was going on. John Nash: Okay. Fair. I think so. You say, you're about to engage in a scenario that we've created that is designed to help you feel more belonging with regard to the activities we're doing as a group, maybe in our class. We used artificial intelligence, generative AI, to create some of that, and we'd like you to engage in it, and then let us know. I think that would, Jason Johnston: Yeah, I think so. Yeah. So, we started with this. There was a moment where you could invoke Eleven Labs, this company, through ChatGPT; you could invoke their GPT to create voices for you. And I was just playing around with it and came up with this intro script because I thought it would be fun just to start off. I'm not planning to replace you, John, just so you know. I have no intention of replacing you. I enjoy our conversation too much, and respect you too much as a scholar and as a friend, to replace you, just so you know, in case there's any concern or question. John Nash: I have been trying to get fired from this podcast and I thought this was my chance, but labeled redundant. Isn't that what they say? Jason Johnston: Well, I know you wanted to take the summer off, so maybe it could just be like a temporary replacement. We could get your voice. Yeah, Summer John, we could do Summer John. Yeah, that'd be all right. John Nash: Well, your new dog, Kevin, could take over the podcast for the summer. Yes. Jason Johnston: Yeah. Yeah. He would have some great things to add, I'm sure. The really interesting thing about this, I'm not saying that this intro is perfect by any means, and we've talked about this a couple times, but just how quickly things are moving right now with AI, and how even a year ago the emotions maybe weren't there with AI-created voices, which are starting to come into their own. I think some of the early pushback for AI voices that I have found from an education standpoint is like, well, students aren't going to like it. It sounds too fake. And so in that way, it's just not going to be a great experience for them. Well, we may be moving past that now in terms of those kinds of arguments against AI voices in online education. But now we're moving towards, well, maybe it's fine for some things. It doesn't matter. Obviously we need to think about teaching presence, right? Community of inquiry, creating a great educational experience for students. Having a teaching presence within the online class is super important, makes a difference for students and for teachers. I'm a hundred percent in on all of that. However, still within that, we pay voiceover people to do some slides that are going to be evergreen for us, that maybe last beyond a teacher, or maybe are shared among a number of teachers teaching different sections or whatever. 
And so I think that we're probably just moving to a place where we're going to see more and more of this and online teaching. And I think maybe it's going to be okay. What do you think? John Nash: It reminds me of our conversation in the middle or end of our ethics episode this calendar year where we were discussing I'll call it scope creep or it will job creep. Jason Johnston: Yeah John Nash: I think it depends. Is this going to be a replacement technology, or If there are professionals in your circles who are already doing this work and then a new person comes along who's not it's not their station to do that work, but the technology will allow them to do it. Will they be stepping on toes? That's what the first thing that comes to mind. Yeah, I think there's questions to be answered at every level, as we've talked about before in terms of contextual ethics on this within your departments. And I was thinking about that this last week. I have the advantage at University of Tennessee of having people, we have humans that can do these things, right? Jason Johnston: So it is more of that kind of question about, well, I shouldn't be using AI when we already have humans to do things. But this last week I was at a conference and talked to a lot of people that are a team of one, right? They're expected to produce multiple courses and expected to be high quality. And they're maybe working at a community college or other colleges that are just not as well funded. And I think it maybe is another different answer to the question, maybe in some of those areas. What do you think? John Nash: do. And I think you're right. I think and again, we're in that world where we say it depends. Many professors are teams of one who are managing course loads. they don't have ready access to a center for teaching and learning or a set of instructional designers or production level tools. And so they want to create some evergreen material. Maybe they think their voice isn't up to lecturing for 15 minutes on video and staying stable. So these tools could be useful. Jason Johnston: Yeah. You have a hard time saying completely no across the board for everybody in every place on these kind of things. However, that being said, I think that I'm feeling more confidence, saying no in my particular context on a lot of these things where I prefer for humans to do the human things when it comes to graphics and music and voice and so on. And certainly We don't want to replace professors and have no intention on that, because I do think that those connections, I do believe that you there needs to be trust in a in a real teaching relationship, and I think you build that through that teaching presence and connection with the students, so. John Nash: Yes. And so I think that's probably the framework that we should be talking about all the time is connection and presence. And then if the affordances of these tools, let us advance that. I think we're in a better place. Jason Johnston: Yeah, that's good. Well, we got right into it, didn't we? With the AI voices spurred a conversation, but we did want to do this little kind of spring check in just to see what's going on. So what have you been up to this spring of 2024, John Nash: Spring has been busy, not only with teaching to two courses both in person on campus, but April and May sort of AI related and teaching related. I was I was out and about in different places. I was in, in April, I was at the Lamar Institute of Technology in Beaumont, Texas. 
Jason Johnston: Okay. John Nash: For their professional development day. Really impressive what they do there. Once a year they close, no classes are held, and all employees, from classified staff and even janitorial and buildings and grounds to the provost and president, come together for one day of learning on this professional development day. And they decided to focus a little bit on AI, and I was invited to give the keynote address. Jason Johnston: Nice. John Nash: On AI and its role and future in higher ed. And then I did some workshops. I did a workshop on prompt writing, and I did a workshop on the ethics of AI, talking about crafting an ethic of care like we have. Jason Johnston: Nice. John Nash: Gave some worksheets for them to think through how teachers could be thoughtful about integrating AI into their work. So that was great. A big shout out to Dr. Angela Hill, who's the provost at Lamar Institute of Tech, and also Beth Knapp, she's the executive director of human resources. They put on a great program. Gosh, and then I was in San Antonio, sort of Texas focused. I was on a panel on AI in the classroom at the annual meeting of the American Society of Biochemistry and Molecular Biology. And so this is a gigantic annual meeting held in the convention center in San Antonio, filled with biochemists and molecular biologists. This was with Craig Streu from Albion College, John Tansey from Otterbein University, Emily Ruff from Winona State University, and Susan Holacek from Arizona State. Have you run across Dr. Holacek's work before? I know you've been running around ASU a bit. But this was a session on AI in the classroom, and so, in that one, I talked about large language models as two things: as an instructional design partner, and as a teaching partner. And so I talked about the John Hattie bot prompt that Darren Coxon has shared out and how that could be used for instructional design. And then I played up Ethan Mollick's work to do deliberate practice and turning LLMs into tutors. And so, in fact, I've got a deck that I put on Gamma that we can put in the show notes, and everybody can see this live web page I've got with links to a whole bevy of scripts and prompts and stuff that I've got there. And then the last one, I added another one too. It was in Nashville. This one was a lot of fun. I was in front of about a thousand folks on a panel at the Health Care Compliance Association's Compliance Institute in Nashville. Now, it was with Brett Short, who's the Chief Compliance Officer and Chief Privacy Officer for University of Kentucky HealthCare. No simple job, I'll tell you. Betsy Wade, who's with Signature Healthcare. She's the VP of Compliance. And then an attorney from New York, Christine Mondos, with Ropes & Gray. Fascinating discussion about what healthcare compliance officers should be worried about in the presence of AI. And it's not just about worrying about LLMs and the use of chatbots, but also where AI has penetrated a whole host of medical-related software devices, and where healthcare folks may be in compliance or not in compliance when they're using AI for patient use that is not licensed for patient use, for instance. It really opened my eyes to the way we've been talking about AI, Jason, mostly around chatbots and ChatGPT and how LLMs are infiltrating work. 
But on this other side, in a lot of universities and also across hospitals that have, or universities with medical centers, hospitals there is people may not understand what de-identified data necessarily is. They think things are de-identified when they're not. 26 states are considering laws for use of AI in medical situations and how patients will be informed about their use. It's fascinating. So I think that was a lot of fun, to be able to talk about that. So yeah busy spring around talking about AI. Jason Johnston: that does make for a busy spring. So, yeah, if if you guys noticed that our podcast dropped off a little bit there, you'll know why for a little bit, but we're back at it. I'm curious. So it was really interesting that you're pulled out this Institute of Technology, I think, and then you're with Biochemists, and then you're with healthcare folks. What is the general feeling? Optimistic or pessimistic, would you say out there in the world beyond education with folks? John Nash: it's I think it's a balance. So the my new friends at the Lamar Institute of Tech, they were optimistic. In fact, I was in many ways. I appreciated the provost perspective that a community college where half their graduates go on to four year institutions to, of academics and the other half are going into the workplace because with workforce development. And in that light, they see themselves as needing to compete. And so how might AI make them more competitive in the way they think about their work, what they do day to day, And so let's be sober and forthright about what its possibilities are. I talked to a lot of instructors who are worried about their students using it in academically dishonest ways. And so we talked about ways in which those could be teachable moments, the way they could think through their own assessments. So I think it's a balance, but I think the overall the administration is optimistic. The panel on use in the classroom with the biochemists and molecular biologists was pretty optimistic and all the other panels were talking about ways in which they thought about how it could be used. Some who was it? It was Emily Ruff from Winona did some, has done some empirical work looking at students reactions to it and where it's been helpful and not helpful. So I think it's overall optimistic. The healthcare compliance officers, that is a balance of just I think mostly awareness and being careful that you're not breaking the law or violating patient confidentiality because if you make that mistake, then the federal government comes in. And this is the other big difference between what's happening in that sector, Jason, and what we do day to day, in the academic side of the house is the federal compliance spanking is severe and so you have to be very thoughtful there. Jason Johnston: Yeah, we've got FERPA, of course, but it feels that very rarely the FERPA police come in and actually do much of anything. John Nash: not like the HIPAA police. Jason Johnston: Not like the HIPAA police, which is, makes sense in many ways, because we're dealing with people's health care and yeah, exactly. John Nash: One of the common challenges across all three of these groups is this understanding of whether, the systems that you're using are opened or closed. So for instance, are you inside your institution's walled garden? And is the, is that information that you're feeding into, it's staying there and not feeding the models or is it going outside? 
That's a big concern in healthcare at any rate, because the tools are so opaque in terms of whether they're AI is baked into almost everything now. I don't know if you use WhatsApp, if you notice, but WhatsApp started to put AI right inside the the app itself at the top. And so, forget the the age 13 gateway is gone now because of all Generative AI is being stuck in all the apps without really being told. So I think that's one thing that everyone had in common there is like, what do we understand about how data gets shared? Jason Johnston: Yeah, it's fascinating. It's again, one of those situations, as you said, with health care and everything else, where AI is just being rolled out. WhatsApp, who's same company as Instagram, same company as Facebook, right? And so you now see it everywhere. You can you can chat with AI. And so it's here. There's no stopping it, really, when it comes to academic dishonesty. I asked my kids a little while ago, where did, did our kids log on to chat GPT and so on? And they're like, Oh, no, mostly they're just like asking Snapchat. John Nash: Yes. Jason Johnston: Yeah. Okay. So what do you do to stop that kind of thing when it's just baked into all the technology that we're using? John Nash: Yes. That's right. And so it makes me think about where this is going is it starts to get not only simple air quotes, simple GPT style chat gets embedded into apps, but then when it all becomes more sophisticated and embedded across other tools, that will be another. I want to talk more about that, but I want to hear what you've been up to. Jason Johnston: it's been a busy semester, on top of all the day to day things that I do. Yeah, lots of hiring. We're growing at University of Tennessee. There's a strong push towards online learning and I think for good reason. We're. we're really trying to reach out to mostly a lot of undergrad folks who have started a degree. They have some college credits. We have almost a million in Tennessee who started undergrad and never finished. And so we're trying to build out those courses. And so we're building up, hired some great new instructional designers. I work with some fantastic people. Very thankful for all that. On top of all that, I, did help lead an AI workshop in April called Thoughtful Teaching with AI. And one of the cool things about this that I really enjoyed is that I was able to partner as part of, we're digital learning, so we're the centralized, like, online learning department. We were able to partner with our teaching and learning office, and shout out to a colleague, Chris Kilgore, there, and then also our writing center. And shout out to Matt Bryant Cheney there. To be able to connect with them and develop basically a and then in some connection with our office of information technology as well and be able to create a workshop together using all of our perspectives and we're able to bring in our different kind of angles and perspectives on this two day basically workshop working with faculty focused around teaching with AI, thinking about creating assignments with AI and how to be thoughtful about that and build it into the curriculum in a way that is human, but a way that is impactful as well. So that was a lot of fun to do and I think interesting. 
I, as a reminder to those out there that are in similar spaces trying to help professional development and education is that there's still a lot of basic questions out there around dishonesty, as you were talking about around just usage, like, where does my information go? How is it used? What's a good prompt look like? What is a chatbot versus an LLM? And those kind of things. And so we still need to be teaching and talking about these basic kind of things when it comes to AI. So. John Nash: Yeah. So much of what I thought would be solved by now is Jason Johnston: Right. Right. Yeah. And then I just came back, like, yesterday from the Digital Universities Conference in St. Louis. This is a conference that's put on by Times Higher Education, which I was not as familiar with, but I'm very familiar with Inside Higher Ed, and many of our listeners and yourself probably Familiar as well. And I was on a panel with Rachel Brooks from Quality Matters, Flower Darby, Brian Beatty, and with a great moderator from Inside Higher Ed Jamie Ramacciotti and we're talking about achieving access through equitable course design. Had a great conversation and some good feedback from people in the audience. I think it was really interesting just to hear the different kind of approaches about even defining what equitable course design looks like. We've got some things that we all kind of land on in terms of UDL and making things accessible, but beyond that, really, what is the definition and some varied kind of approaches from, Brian And Rachel, we're less likely to really want to land on a definition. Flower Darby, who's done lots of writing in this area, was, had a little clearer kind of idea of how to move ahead. So John Nash: Nice. Nice. And you you mentioned to me, I think that there was also some presentations from some vendors and things, particularly Turnitin was there. Jason Johnston: yeah, it was really interesting. Yeah. John Nash: talk about that? Jason Johnston: Yeah. And, without throwing anybody under the bus at all, but, we do talk about ed tech and we're at UT is a Turnitin university. We have Turnitin on. But it was really clear to me. That they were there on a really strong PR push to I think they've probably gotten a little bit of backlash on some of their AI detection that they turned on and then they turned off. And it was really clear that they were there to, to strongly let people know that they're, Their purpose is student learning and good outcomes for students. It's not for catching cheaters. That's not their focus. I'm not sure if that is It may be that they're doing a little bit of a, not just a rebranding, but a change in terms of their organization itself. I had a hard time, I think, not hearing some of those words without some skepticism um, and without kind of feeling like, That it's easy for them to say that now that maybe they're losing some of their market spot that they had before. And so they're trying to reinvent themselves into something else. I'm not sure. I don't want to, I'm not judging anybody's motivations here for being there. I'm just on face value I think we need to continue to have a a digital critical approach when it comes to working with our ed tech partners. John Nash: Certainly. Does it feel like they still want to try to detect AI written work? Jason Johnston: What was interesting is that they seem to present it as if they could very clearly detect AI written work. There is not time for questions for this person. 
And so the main kind of operating guy, I don't know who it was, was doing a bit of a keynote talk. There was no time for questions; he just gave a spiel and then left. But yeah, he very clearly kind of demonstrated, put on the slides, that they're able to detect AI, and this is what it looks like right now. John Nash: Ah. Jason Johnston: There was no chance for me to stand up, and I guess I could have stood up and just dissented while he was talking, but I guess I have a little more, John Nash: Yes. Jason Johnston: maybe social, John Nash: More decorum? Jason Johnston: Decorum than that. John Nash: Not like in a British parliament where you stand up and just yell "rubbish." Jason Johnston: Exactly, and start pounding the desks and so on. Yeah, if I knew this was coming, I could have worn my AI Detectors Don't Work shirt or something like that, and had more of a silent protest. I could have just had it on without having to interrupt him. John Nash: Well, fascinating. I don't know what to think of that. I want to believe that we're moving beyond that, but I guess, what does a company that's called Turnitin, who's made their way by detecting plagiarism in plain old written essays back when we used to do that, right, what do they do now? Yeah, Jason Johnston: Yeah. Well, you had an experience recently, right? At your school. Are you able to share about that a little? John Nash: Yeah, well, yeah, just a little story from a colleague that I was contrasting in light of a great interview with Dario Amodei, the CEO of Anthropic, which is the company that makes Claude, the LLM, and he recently shared some pretty mind-bending insights on the Ezra Klein show about how AI is evolving and where it will go, this exponential growth in AI tech, and that in the next 18 months to three years, we could see things like AI planning our trips, and it's already writing code. It's going to be integrated into our tools even more so. And this conversation struck a chord with me when I thought about a situation that a fellow professor shared. She had caught a student using AI to write a paper, and they turned that paper in, and she thought it was written by AI, it felt AI. But this same student had sneakily passed it on to another student who submitted it also as their own work. So we have not only academic dishonesty in terms of use of, say, ChatGPT, but then full-on plagiarizing and cheating in the old traditional sense by this other student. How she handled that is really not the point of this, but she was throwing up her arms a little bit saying, well, what do we do about this sort of thing? And it was a kind of snapshot of the massive ethical puzzles we're now facing thanks to the presence of AI. But also, what Amodei is talking about is AI getting so good at handling complicated stuff that soon chatting with AI is going to feel as natural as talking to you and me. And here we are now today trying to figure out how to keep AI from turning our students into these copy-and-paste wizards. And so it was a bit of a reality check for me about where we need to be. So my story really ends with a question. What's the game plan now for us as educators? We're still stuck trying to figure out how to assess well in the presence of AI, in courses that have AI-able assignments. So what will we do? How do we push this conversation next? 
I think we still have to think about AI as a force for good in education, but that has to come with more conversation. It makes me realize I'm not having enough conversations with my colleagues about ethics of the use of the tool, transparency of the use of the tool and where it can benefit them. Jason Johnston: Yeah, and I think that benefits, ethics, and transparency are things that we can continue to look at, and I think what we can't do is make policies based on where AI is today, right? As Sam Altman, I think, said even just this last week, what you're using right now is the worst AI that you will ever use. It was something like that. John Nash: Did he say that? I know Mollick has been saying that too for, yeah. Jason Johnston: this is it. This is the worst version of ChatGPT that you will ever use. I've heard to the grapevine that, ChatGPT 5 is coming out this fall and it's going to be like a 100x what we've experienced. I don't know what that means exactly but, I just think that we cannot, we can't look at it today and say we've got to make policies based on the quality that we see right now. I think that we can think about some of these other ways that we can approach it that, that should stand the test of time. Transparency. I think whether it's 100x in six months, when we start the fall break or fall semester, whether it's 100x or not, I think transparency will still be a thing we want on the table. Right? John Nash: Yes. Yeah, definitely. I was reading I get a little newsletter every morning called the Bay Area Times. They were noting that OpenAI showed off an unlaunched deepfake voice tool recently. It only requires 15 seconds of reference audio to clone a voice. Now we were talking earlier about, well, it wouldn't be nice if we could have some voice generated material for instruction, but we weren't talking about cloning or deepfaking voices. But if you only need 15 seconds, I think that's that's pretty amazing and frightening. Jason Johnston: Yeah. Yeah. There's a Hard Fork podcast, both you and I are fond of. They just did one, and we can put a link in the notes, where they were talking about a situation where a principal, was was put on suspension because somebody had used a deep fake of his voice that sounded really realistic. And not just realistic, but sounded like the environment that somebody might have just recorded somebody, in a hallway through their phone kind of thing, saying things that he didn't say. John Nash: Yes. And another example in a school in Southern California, I think of students who were suspended for doing deep fake images of female Jason Johnston: Right. John Nash: that were that were pornographic. Really terrible stuff. And I think it shows how important it is for school leaders, both in, P 12 and in higher ed to be thinking about how we'll get in front of the stuff. Do the existing policies you have really get at it? Jason Johnston: Yeah. One of the sessions I was at this last week at Digital Universities conference was by Dr. Robbie Melton, who is the Interim Provost and Vice President for Academic Affairs Technology Innovations at the Tennessee State University. And she was talking about the impact of AI on minority-serving institutions, which hers is one. One of the things that, that she was talking about in this was just that she was stressing, if you do not understand what AI is doing, then you need to. Like, not everybody has to be an expert, but everybody needs to understand the capabilities. 
And she's like, "This is why, if I'm showing a demonstration, I don't show them ChatGPT 3.5. We go for 4. This is why I keep up on all these things, so I know exactly where it's at, because people need to understand where it's at and where it's going in terms of its capabilities, because people underestimate what's going on." And I think it's the same thing in our schools, really understanding where all of this is at. I think that as leaders, we do need to have at least some sense of where the technology is at today and then where it's going tomorrow. John Nash: So if people are interested in listening to that episode, that was the Ezra Klein show where he interviews Dario Amodei, D A R I O, A M O D E I. And it's a really interesting picture into where one leader of one of the frontier models of generative AI is thinking this will all go. Jason Johnston: Yeah. Those are some great series with Ezra Klein. Again, just for all of us to expand our understanding of where things are at and where they're going. Yeah. Well, it's great to catch up, John. It's nice to see you. And I'm glad to see you after all the busyness of the semester. We've got a couple more podcasts coming up with some amazing guests. And then we'll do a summer kind of break and wrap up, but yeah, as always, our podcast can be found at onlinelearningpodcast.com. Please, wherever you listen to this podcast, if you can, do a review. That would help us know how things are going, as well as help the algorithm get it in front of other people that like similar podcasts. And find us on LinkedIn, of course, and we've got the links in our show notes as well. Love to hear back from you about what you think about this podcast and others and everything that we're saying here. So, John Nash: Please like, comment and subscribe. We have three more episodes in the hopper that are going to come out with some amazing guests. And so I'm excited for those and excited to talk about summer plans after that. Jason Johnston: Sounds good. All right. John Nash: Cool. Talk to you later. Jason Johnston: Talk to you soon. Bye.
EP 27 - Christelle Daceus from Johns Hopkins University - Humanizing Online Learning, Inclusive Practices, and Digital Neo-colonialism
May 20 2024
EP 27 - Christelle Daceus from Johns Hopkins University - Humanizing Online Learning, Inclusive Practices, and Digital Neo-colonialism
In this episode, John and Jason talk with Christelle Daceus of Johns Hopkins University about digital neo-colonialism and efforts to humanize online learning through training about AI and promoting inclusive practices. See complete notes and transcripts at www.onlinelearningpodcast.com Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)* Links and Resources: Christelle Daceus, M.Ed., is a Course Support Specialist at the Whiting School of Engineering, Johns Hopkins University, and the Founder and CEO of Excellence Within Reach Watch for Christelle’s book chapter - Coming late 2024 on Springer Nature Press “Using Global Learning through the Collaborative Online International Learning Model to Achieve Sustainable Development Goals by Building Intercultural Competency Skills” co-edited by Kelly Tzoumis and Elena Douvlou with a chapter titled “Combatting Virtual Exchange’s Predisposition to Digital Colonialism: Culturally Informed Digital Accessibility as a Tool for Achieving the UN SDGs” Johns Hopkins Excellence in Online Teaching Symposium John & Jason’s 6 Guideposts - Slide Deck (via Gamma.app) Christelle’s symposium video Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License. Transcript We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections! [00:00:00] Jason Johnston: What'd you have for breakfast? [00:00:01] Christelle Daceus: I did not have breakfast. I was thinking here that I have two dogs, so my mornings consist of a lot of making sure they get their walk in, and getting my nice kind of walk in the morning and things like that. It helps me start my day. And I spend a lot of time just hydrating; tea, I like, because I think I have a full plate, I would call it. I like to have a really quiet morning, just the simplest morning that I can have, depending on what my first thing is to do that day. This is my first meeting today, so I was like, okay, I'm just gonna chill with the dogs, get into my emails and things like that. [00:00:40] John Nash: Nice. We've been getting more into tea lately. There's a wonderful woman-owned emporium near our house called White Willow and they've got a new herbalist, and we picked up a lavender Earl Grey tea there last night. [00:00:53] Christelle Daceus: Ooh, that sounds good. [00:00:54] John Nash: The little things. I'm John Nash here with Jason Johnston. [00:01:00] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast. [00:01:05] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning's had its chance to be great, and some of it is, but there's still quite a bit that isn't. And Jason, how are we going to get to the next stage? [00:01:20] Jason Johnston: That's a great question. How about we do a podcast and talk about it? [00:01:24] John Nash: That's perfect. What do you want to talk about today? [00:01:27] Jason Johnston: Well, today we're probably going to hit some pretty big themes, John, and it's partly because we have connected with somebody that we first connected with at the Johns Hopkins Online Teaching Excellence Symposium. So we have with us today, Christelle Daceus. Thank you so much for joining us. 
And we're really just looking forward to talking to you today. [00:01:51] Christelle Daceus: Me too. Thank you so much. [00:01:54] Jason Johnston: Well, we wanted to get started by just talking a little bit about what it is you do currently. You're connected in with JHU, so maybe talk about that first, but I also know that you're an entrepreneur. You have other pursuits outside of JHU as well. [00:02:07] Christelle Daceus: Yeah, I am a long-time educator. I've had my hands in all things education at various levels. And yeah, now I'm at JHU, working for the School of Engineering, working for the Center for Learning Design and Technology. I work as a course support specialist with the instructional designers and technologists, creating materials for courses at the School of Engineering at Homewood, making sure that those materials are accessible, like videos have captions, and materials are able to be processed and read by screen readers. And then we also have the Faculty Forward Academy, where we provide professional development for faculty. And I have some awesome opportunities to collaborate with the School of Education in their international student work group, and I'll be working on some workshops for them in April, providing some work with the faculty on AI and different tools, how they can incorporate AI into learning, and a no-fear approach to AI, because there's a lot of anxiety there, I think, for faculty. And that's my goal with that workshop, to meet them in the middle and show them that AI is here. We can't quite get rid of it, but we can elevate our learning and how we work with students. And so I'm super excited for that. I also work in some research with global learning, so I have some international partners I'm doing exciting things with. And we have a book coming out in May or June with Springer Nature Press. And so that book is about global learning and how sustainability in education can be affected by the United Nations Sustainable Development Goals. And so we just launched our book recently at the World Environmental Education Congress in Abu Dhabi, just a few weeks ago, and we talked about our book and had a panel there, and that was super exciting. Very excited for that work. Obviously it was again like that natural opportunity I was talking about earlier, where it's just, I'm meeting good people, talking about the good work, and then we started creating some great work together. I'm really excited about that. And then, yeah, like I said, I'm an entrepreneur. So I have a business in Baltimore City, which is an academic center that's really starting to connect with the community and starting to grow into a very well-rounded program, which is exciting because I'm just in it maybe a few months. But it's one of those moments where hard work is paying off even in the new pursuit, where a lot of the relationships that I've valued and forged within Baltimore and within education systems and Baltimore City Schools are starting to just grow, and I'm able to really reach students. Because just moving here, I'm actually from New Jersey, and I moved here maybe five years ago, and I've had an opportunity to contract in schools and things like that. And Baltimore City Schools is constantly in the news for their educational needs and things like that. And because my career started in K-12, I really wanted to connect the work that I do at this higher level, right? 
Accessibility, advocacy, inclusive education, but bring it to a community level. And I think one of the things you guys asked me was about affecting the individual, like, how can we do that work and reach the individual and not just put out the research and all these kinds of things, which is amazing and important to have those conversations and keep pushing forward with. Workshops and conferences and getting those ideas out there. But then I have an opportunity to not only give opportunities to other educators to bring those opportunities to students, but also really, impact the community, a community that needs it, Yeah. I also am a mom and I have a son he's four. His name is Malcolm. He's the greatest. And yeah, I'm just a busy bee. I'm all over the place. But I love everything I do. And I think I have a good balance right now. So I'm lucky to do the things that I love. [00:06:16] Jason Johnston: so we sent you some questions, but like you just. You just landed us with four pretty big things that you do. We could probably spend the entire time talking about any one of those things. So I'm going to have to show some restraint, because there's some things we would like to get to, and why we connected over this, that I think are really important. I don't want to derail anything here, but I was really curious, and I'm sorry to our listeners, because we keep saying that we're going to stop talking about AI, and then it just keeps coming back. [00:06:44] Christelle Daceus: You can, that's what I'm saying. That's the workshop. You cannot run away from AI. I'm so sorry. [00:06:51] Jason Johnston: And we love it. We like, it's really interesting to us. And all the time are like texting each other things. I actually texted my wife yesterday by accident, something I meant to text to John, and it made no sense to her whatsoever. [00:07:05] John Nash: Does that make us work spouses right? [00:07:08] Jason Johnston: I I think so or at least AI spouses. but because every time something comes up, I'm like, Oh, John, did you see this? Did you see that? And he's like sending me stuff back as well. Anyways, tell us a little bit about your approach with the "no fear AI." Cause I really, I haven't heard that particular kind of phrase, but I'm interested because I think we're all in the same space in, in education. [00:07:32] Christelle Daceus: Like I said, with the School of Education, they have a work group that works towards just how do we work with international students and within their own faculty groups they make sure that their programming and professional development includes that kind of work, and so they approached me because, a lot of faculty just don't know what to do. Right? The biggest issue is the plagiarism. Like, how do we keep up with this? How do we know that students are submitting authentic work? And that's the idea behind how , I'm planning for the workshop is, that we're talking about first, what really is AI, right? It's not this solve everything. Like, there's so much more we need to know. There's so much kinks that need to be figured out. And it's so exciting when you see ChatGPT create a menu for you, create a business plan and all these kinds of things. But, people like us who work in online and work with technology, we know that there's like limitation to the authenticity of it to the like humanization of the technology, because there are people who create these technologies. And these people are often in an industry that is dominated by people who look a specific way, right? 
And so those people have specific ideologies. And so when they're creating their work, they're using their specific values and ideologies and biases to create that work. And it's amazing work, but it's not something that is, Full spectrum hitting the complexity of humanism. And I won't scare the faculty by phrasing it that way. But, that's really the conversation of just, letting them know that there is limitations and as much as it looks like it can do, we are still, we still have the power in our hands, right? Because we have this thing that AI or any kind of technology would never have, which is the human brain. And it's capable of so many things that no matter what we create and no matter how exciting and shiny and new it is it's just never going to be more meaningful than that. And we, the important thing is not allowing it to, right? Not allowing to ourselves to give AI and VR and XR and all these kinds of things. The power to take our human interactions or communication or connections and make them artificial. Right? So yeah, so that's the idea behind the workshop is that we are going to now give them the tools, right? Okay. So what does that look like? You're telling me don't be scared. Don't be nervous and just. Embrace it. Okay. What does that look like when embracing it? Right? And so I want to talk about some faculty that are already doing that. How can we use ChatGPT is what everybody knows to review work that students are turning and tell them, sure, use it, get it out of their system, and they're going to start to recognize if you show them, okay, the reason we're concerned about this is because you're not getting the accurate information, right? So let's have the students sit down and compare some of their own research to ChatGPT's research and on a similar topic and, compare those things and analyze the technology itself, and it's gonna, teach them some things, which is exciting, right? It's going to give us some new things, but at the same time, it's going to help them question. They're learning in an authentic way that it's not just I'm answering the question and that's it. But I'm having this moment where I'm like, I'm thinking about my thinking, right? It's something that is in something that we created within the engineering school. But this metacognition of Remembering that it is a technology, right? It's not our reality. It's just something, a tool that can be applied to the courses, especially online. [00:11:11] John Nash: Wonderful. Really cool. I think, I have a million questions. I, I've been worried about the the historical bias inside the large data sets that these LLMs get built on, even as actors inside universities like mine who are doing sub projects, they can go out and get the, I guess I'm learning about these, but there's the common crawl data set. There's BookCorpus, Wikipedia, these things where the data comes from. And then on top of that, as you just noted, the developers values and ideologies get put on top of that. And so I'm thinking about ways to help others, particularly teachers, see their evolving role , as an actor inside this network of flow of information from the large language model to a learner, whether they're over 13 years old and it's okay for them to use them or whether they're in post secondary. And I'm wondering how you're feeling about that too. I see now teachers are needed more than ever as the mediator between the screen and the learner in helping set up critical conversations. 
And I'm thinking about these guideposts that we talked about Jason and I did at the symposium at Johns Hopkins and being human to your students and yourself, treat humans as individuals, and you helped us expand on a point which was to recognize that not all humans are present. And so I'm thinking about that. And are you still feeling that way that there's a place for teachers to help learners remember that not all humans have been present in this AI flow of information. [00:12:53] Christelle Daceus: I think the difficult part is having the time for those conversations in the classroom, I think that's where immediately teachers are like, this is just another thing, right? On our plate for us to, have to deliver, but that's where I'm hoping to encourage authentic, interactions and opportunities to have those conversations, right? And so I really try to encourage faculty to. Talk about their own process, their approach to a assignment, right? So let's say we have this AI assignment or whatever assessment that we have in a course and they can talk for a moment, whether that's in the overview of the assignment or in the overview of the module, where they're saying, Okay, here's what's assigned this week. Here's some things that I would keep in mind when I'm approaching this and here's how I would approach, an assessment like this or an assignment like this. And just, remind them that they're not on their own, right? It's not just especially online. It's so easy to just be on the other side of the screen and not really connect. But if you remind them that, hey, I'm still here and, I try to do these things too. I found my way, I think a really good habit that I'd love to see is, that faculty in their course introductions or syllabus can talk about how they got to their role, as a professor like, yes, we have the bios and, tell them a little background, but really what courses that they take, what, how did they approach their learning in those courses? A lot of program, if you think about the school of engineering these are common courses a lot of engineers have to take to reach their programming so a lot of these, more senior engineers and people in the industry, they've had those experience. They've had to approach the learning and it might, the learning might look differently right now, but there's things that work when you're, gaining retention or learning new things that just work, right? And no matter how the learning is approached. And so what I realized is there's an assumption that because you're at a certain level, you just know those things and you should just know how to, you know, um, really organize yourself well enough and organize your course materials, prioritize your learning in an independent way when in actuality, online learning is so new, there's no real approach to it, right? Right. There's no real guideline to, okay, well, this is how the norm of learning online is for the student, right? I think we spend a lot of time making sure that teaching is accurate and like we're putting out good materials and we're accessible and all these things, but then students, they're just told, log in, learn, even though it's different than anything you've ever done for the majority of your academic experience. And. But, do it and do it well. And so yeah, those are the things I think about that technology moves so fast that we forget to step back and make sure that everyone has the steps to apply it and be a part of it and participate. And I think that's what true accessibility is not. 
Pinpointing the people who are most in need all the time, but sometimes it's if everyone can reach this most likely, that's the best products, right? That's the best experience. And so that's how I approach accessibility and online learning and the design of those courses. [00:16:14] John Nash: I don't want to oversimplify something you just said, but it, did it seem like I was hearing you say that there are too many instructors who take on an online teaching team? Thank you. endeavor, inadvertently throw the students to the wolves a little bit. There's not enough thought going in there to everything. [00:16:32] Christelle Daceus: I'd even say it's at an institutional level because half the time, the faculty or teachers are also being thrown into new technology and they, start the school year and they say, Hey, these are the things we're using our courses. This is the LMS that we're using, teachers don't really have an opportunity to decide on those things, so I think that's really what it is that yes, there's. The aspect that teachers could, step in the ways that I talked about, right? And helping them adjust to the technology. We have to make sure that as an institution, we're reaching them. And me working in K 12, that's the, that's where I see that the most, right? They put these laptops in classrooms and they have all these kinds of very amazing educational technology, but, Half the time, it's just, this is what we're using now. This is how we're, looking at the data, how we're tracking our students progress, and all these kinds of things. And you just have to adapt and what happens to the teachers that can't, right? Which is what happened in higher ed with COVID. Hundreds, thousands of classes all around the country were placed online and everyone said, figure it out [00:17:39] John Nash: Yep. [00:17:40] Christelle Daceus: and not only in higher ed, but then there's all these K 12 kids logging into zoom with no idea what they're doing. And that's the example I would use of just technology moving a touch too fast. Right? We saw an emergency which is the pandemic, and we're shutting down. We're locked down. We're in our house. And someone said, Oh, but we have the technology. We've created this. We've got it, but didn't think, okay, but schools are safe places for students. Right? And especially at the K 12 level, are we making sure that this is safe, right? Are they logging into secure servers and all these kinds of things? That's where you saw Zoom immediately change its entire kind of interface. Very quickly, they were like, oh, we can't allow these Zoom links to be shared all over the place and people are popping into different rooms and things like that. And so you started the more of the enterprise model and for schools and things like that and yeah, which is important. It's important for us to learn, but we don't want to put our most vulnerable people, our most vulnerable stakeholders at risk, which are our students, right? At any level. They are the stakeholders investing, if not their time, with younger students, but also financial investments when you're at the higher ed level, they invested into this product, which is their higher education experience, and they want to make sure that it's high quality and it's reaching them in a meaningful way, right? And they're walking away from that experience. 
And so when I always say I am so happy I didn't graduate around that time or I wasn't trying to go to college because, that experience of, oh, I'm having my first, second year of college and. All of a sudden, they're like, get off campus and go on your laptop. You still have to, pay that ticket price. You still have to pay, to be there and be present and reach all the same goals, but it's a completely different environment. And we don't even know if you're going to be able to succeed in that environment, but we all just have to. Because we want to, well, this is the colonialist piece, so I won't get too much into that. Um, but yeah, it's just the continuation of capitalism. That's, that was the priority, right? That we needed to keep doors open, we needed to keep institutions pushing and we're literally dealing with a global health pandemic, people's lives are at risk, people are dying And instead of taking a second to make sure we're delivering this essential need, right, of education in the best possible way. It was a little rush and we were, we put kids in danger. We put, institutions in danger in that way. So [00:20:21] Jason Johnston: I feel like whether it's a, global pandemic pushing us in this direction, or maybe a school is pivoting to online or even down to a teacher who's been, asked to move their classes online. I feel like our default is to try to continue the same things that we've been doing, but just stick them online. So if a teacher is very comfortable and this is the way they've always done it with specific kind of assessments or a very lecture based approach that everything just online and all of a sudden becomes just this kind of like same stuff, different package. [00:20:58] Christelle Daceus: it's a folder, right? It's just like holding all the things and we hop online. We do our little lecture or recording and that's learning and the, we try to do interaction through discussion boards and things like that, but I think even the creation of discussion boards and the, is that why did we need to look like replicate discussion? Why did we not instead create moments of authentic discussion, which is harder to, of course, analyze quantitatively , but I understand we have to find a balance, it's not easy, but this is why I say, my approach to, thinking about the professional development of educators is to show them the way, right? Am I making sure that my materials are reaching every student in the room, right? And that means taking a moment to check in on if there's translating opportunities, right? What is the demographic in my room? Am I making sure that the content is culturally relevant to them? Okay. Am I sure that the words that I'm using are sensitive to the kinds of like cultural mindsets that are in my classroom. And sometimes as educators, you're not in a room with people who look like you. I hope most of the time that's not how that looks, and you don't wanna miss opportunities for a student to grow and to reach the really good content that you're trying to deliver because they couldn't access it online, right? Let's think about international students who are checking in online and we have links to sites that in their country are banned. So then we have a student that's okay, but I really want to go to this school, so I'm going to get a VPN. And I'm going to do what I need to do so I can get this degree. 
And maybe it's normed, but is that really what we want as institutions or as educators that students are risking themselves in a, I guess legal way or judicial way where they have to go this extra mile versus the educator creating unique materials in such a way that they don't have to click on a link, right? The learning is in the LMS. There's interaction there with their peers. They're really having an authentic experience instead of going into another space. Maybe you send that information in a different way. Maybe you have alternatives and you can still have your link, but making sure that they can reach that in some way, right? I've, through this work, found out there's YouTube alternatives and all these kinds of things in places like China and the UAE, getting familiar with that, or at least, in the education, if you know that's a demographic that you serve, that should be a part of your own professional development, right? That you're pursuing how to adjust your teaching for those students. But I think as institutions and as educators, we have to norm those conversations, norm it in a way that I think once you start saying inclusion and diversity and people get, "Oh, but I am, like I am, I'm doing the right thing. I'm doing my best" and everyone's doing their best. But, once you start to put practical steps to it, okay, well, there's things I can just. Add to what I'm already doing and we just enhance overall, just the quality of education. And everybody would ideally. [00:24:24] Jason Johnston: Yeah. Yeah, that theme of intentionality was something that came up over and over again in that J. H. U. symposium and what I hear you saying is part of that intentionality is being able to, is taking the time to do professional development so that you can take a step back, you can think about maybe where some practices need to change, and ideally as part of the professional development. Here are some practical things that you could do today. Maybe some small steps or maybe some individual individual examples of things that, that could be done. [00:24:58] Christelle Daceus: Yeah, and I would say it doesn't have to be the big conference or all these things can be reading, a really good book, a really good author who's familiar with the work [00:25:06] Jason Johnston: Yeah. [00:25:07] Christelle Daceus: If that's, of concern to you relating yourself to the other voices that, Are matching your values of that you want to bring into your classroom. And I would say even at conferences you get to sign up for different sessions and my favorite session to sign up for the small ones that they put in the room that's all the way down the hall. And there's only a couple of attendees because we sit in there, we have amazing conversations, because everyone's being heard. And it's not just anybody talking at you. It's real educators and they're having real conversations and then putting in some action steps. "Okay. How can I help you with this at your institution?" And how can we, collaborate in that way? And even actually, at the conference I went to recently, we had field trips, I think, on the last day of the conference we were on one of the charter buses and a colleague from London, they're working on some environmental work there. We just connected immediately, and he starts talking about how he" is looking for how to elevate the design and meet the community and be inclusive and all these things. I was like, Oh, I love that's what I love. 
I love to do all those things," and that didn't happen because I sat in his session and, heard all his bullet points and stuff like that. But it's because we came together as educators. We're trying to have an authentic experience where we get to, Abu Dhabi is very sustainable and environmentally aware. And so we were going to a mangrove where they plant trees and expand foliage there. And it was great to have this authentic moment where we were like, "this is just something that I love." And, at conferences is almost like a safe place to nerd out about the things that you really love in education. And so you get into these conversations. "Oh, what do you do?" And then all of a sudden. You found, your match that somewhere in another institution, but doing similar work and seeing that, it works the things that you're doing, but maybe in a different way somewhere else. And you're getting new ideas and we're building education in those ways. So that's what I'd like to see, I think, in the future of professional development and conferences, like having those more authentic, just conversations, open discussion on these real things. Like, how are we really holding back our students by allowing colonialist practices to seep into education where there's one voice, there's one identity that kind of leads the way, right? There's one version of what the the most what is the word? Something that has, I don't know, you're more important because you went to a certain institution, you're from a certain part of the world, or from a certain culture there's a better word for it, but my point is that, we hold our students to a lesser standard when we stop short replicating in person online. When we have educators, creatives, to really come together and are like, "This is an opportunity to create a whole different educational environment that can just reach students in a different way, it doesn't have to be end all be all we don't have to get rid of, schools or anything like that". But there's a lot, especially at the case level where schools are fully online and they're interacting with students like that. But I would hate to think that a student. Spend 12, 14, 15, 16 years of their education, and they're just, staring at the same thing year after year, and they're just reading things online and they're missing opportunities to interact with their peers and grow their ideas and hear. Validation and feedback like we did sitting in the classroom. Yeah, [00:28:52] John Nash: You brought up the notion of colonialism and you've talked a little in the past about digital neo colonialism. Could you give our listeners the digital neo colonialism 101? [00:29:05] Christelle Daceus: Yeah. So, this idea that um, I think I just mentioned colonialist practices are replicated through education. Right? And if we're thinking about imperialism. It's this pursuit of resources, right? In the past, it was the pursuit of humans, right? And the institution of slavery was the exploitation of human labor and human bodies and cultures and the eradication of culture so that other cultures could be elevated and given power socially, economically that stands to this day, right? And. When you don't have the massive institution of slavery, it continues in different ways. And we saw things like the black codes and all the limitations that freed black persons had to deal with after emancipation that kind of limited and how people of color could be successful. And that's just an example at, the domestic level. 
But then when you really think of it globally, there's just a continued, repression of so many cultures, whether that's in the Caribbean, whether that's in Africa and Asia, these cultures that were impacted by colonialism and intruded upon and some of these places, their Colonizers are still there, right? They have embassies there and offices and, and we just made these laws and all these things. Right? And it's the same thing in education where just like the for profit prison system, right? That's a continuation of enslavement of control over the population is a way to control, consequence to what the larger they decide as what is criminal behavior, what is dangerous to the society that we are trying to uphold? And of course, that's important, but when it's designed based on stereotype and race and, these false ideologies of inferiority due to differences of, skin color or being an immigrant or different economic class, that's when those things get spread further and further, right? And so in education, this looks having international students come to American schools to become more legitimate. That's the word I was looking for earlier, where these institutions legitimize you, right? Whereas you don't have American students going to some of the other institutions because in certain places, like the global south is what they'll call it, right? Those third world countries or whatever you want to call it you don't see American students or British students or Asian students going to those countries because the legitimacy is not there, right? There's the social legitimacy of that degree would not have the same weight, right? Even though I'm sure there's plenty of institutions with great work and they're like, I have partners all over the world. And so what does that do? That brings more economic growth to certain institutions, certain regions, certain countries, brings more influence because this education is legitimate. So the research they're putting out from this institution is more legitimate than those other ones. And so those perspectives from the people who can afford to go to those institutions are then pushed forward, it's this kind of continued. Elevation of a certain voice, right? Of a certain pedagogy, even, right? Again, we're going back to replicating what's in person online. That doesn't work, because, It was already barely working in person, right? We're still figuring that part out. So, you know, We to, to, to replicate something that's not even that doesn't as strong as a foundation is we wanted to online, which is something. We don't even know as much as we can about it becomes just this loose experience, right? Where people aren't getting as much as they're investing into it. I think we're all spending a lot of time getting familiar with technology, investing into it, incorporating into our lives and we want to make sure that what we're getting back is not just a regurgitation of. Colonialist thought of, making sure that the majority is elevated, that the global north is stays in its position. It's an opportunity for the global north to move out of the way and say, yes, because we have this technology that allows us to talk to people from all over the world. This is an opportunity for us to just give them that platform, right? We want to give them the opportunity to speak for themselves. Like we don't need to advocate or save or, any of those things we need to. Just not bombard the industry, right? 
We don't need to dominate in a way that doesn't leave space for the global south or different institutions or different voices to actually be heard which is something I talk about in my chapter as well. [00:33:59] Jason Johnston: Yeah, this idea, and please correct me if this is not part of what you're talking about here. One of the practical ways of moving forward is this idea of allyship. Does that resonate with you or is that is that different than what you're saying here? [00:34:16] Christelle Daceus: Yeah, I think that's a really good word to put to it, so that I love it when big ideas can be consumable, right? And yeah, it's this authentic allyship. Right, that we remember that, yes, there's pursuits of greater things. However, we don't want to perpetuate competition and capitalism and just growth for the sake of, being bigger than the guy next to you kind of thing, but rather than, if you think about the SDGs, the Sustainable Development Goals, the goal is to really elevate our earth, right? And to expand the longevity of our earth and our climate and making sure that in all aspects, industry and education and health and economic, we're all growing and we all have the same opportunities. To be, players on the world market. And yeah, so the allyship comes from first accepting that, the end all be all is not being the person that's most on top, and even if you are the person that's most on top, there's no problem with helping those that come behind you, right? Or who are in a different position than you are, and bringing them to where you are, right? I think we have to get out of this illusion that technology creates and being online creates that, this is just a person on the screen. It's no, the world is still, if we're connecting the world and we're having these international conversations or conversations with people all over the country, or even in your community, we're not even meeting. I could be in Baltimore still having my Zoom meeting with someone that's a couple blocks down. We don't do that anymore, right? It's oh, I don't want to meet you at your office. I'm just going to hop on Zoom, and that's it. And not forgetting that when we do have in person interactions to make them meaningful, I think, in a new way, because they're becoming less apparent and less available to us and enjoying life in that way, I think. And as professionals, just really, like I said, just recognizing, one where you're coming from and what your strengths, privileges, whatever you want to call it, are. And when you are thinking about enhancing that work or growing that work, making sure that it's not just one voice that you're hearing in your head, right? That you're trying to elevate those other voices that are available to us and trying to learn from us, right? They deserve that. [00:36:51] Jason Johnston: You wonder about what this disembodiment of meeting together will do to our psyches over time, the fact that we're just floating heads here in zoom looking at each other versus being in body with one another. Anyways, that's a whole nother topic . But I but I think I recognize what you're saying there in terms of our meeting together, how, the digital, although can span, because it'd be some amazing affordances to Zoom and we can span distances. We would not be connecting again. I don't know the next time I'm going to be in Baltimore, might be a while. And so this is a wonderful way that we're using digital technology to span a distance that couldn't be
EP 26 - 1st Anniversary Special - Year 1 in review and the educational and ethical considerations around AI-generated music and video.
Apr 1 2024
EP 26 - 1st Anniversary Special - Year 1 in review and the educational and ethical considerations around AI-generated music and video.
In this episode, John and Jason talk IN PERSON, reflecting on year one of their podcast. Keeping with the theme, they also find a few rabbit holes to chase, consider developments in AI, and talk about educational and ethical considerations around AI-generated music and video. See complete notes and transcripts at www.onlinelearningpodcast.com Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)* Links and Resources: Hard Fork PodcastSORA OpenAI VideoAlibaba EMO Video Demo (Jason’s LinkedIn post)Suno.aiSupport Human Artists! GangstagrassMr. Beast on Youtube (not that he needs any more clicks)The makeup brush holder John keeps his pens in Transcript We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions! 1 Year Anniversary Special [00:00:00] Jason: Would you happen to have a pen I could borrow? Yeah. [00:00:02] John: Felt blue, black. [00:00:04] Jason: That is amazing. I've just this moment, I just noticed your incredible, your - you've got like a pen store. [00:00:10] John: These are makeup brush holders. [00:00:12] Jason: Oh really? Okay. Black, please. [00:00:15] John: ballpoint, flare [00:00:17] Jason: pen, Flare. Perfect. [00:00:19] John: yeah [00:00:19] Jason: And would you happen to have any sticky notes? That's incredible. You are really set up here. That is something else. [00:00:24] John: I dream that someone, no one visits me. I'm set up for a full-on brainstorming session with a gigantic. Five feet by three-foot whiteboard and 500 colored sticky notes. [00:00:34] Jason: Sticky notes galore. [00:00:35] John: Yeah, I'm ready to change things if anybody wants to come over. [00:00:38] John: I'm John Nash here in the same room with Jason Johnston. [00:00:43] Jason: Hey John, hey everyone, and this is Online Learning in the Second Half, the Online Learning Podcast. [00:00:48] John: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last couple of years about online education. Look, online learning's had its chance to be great, and some of it is, but there's still a lot that quite isn't there. How are we going to get to the next stage, Jason? [00:01:02] Jason: How about we create a podcast and talk about it? [00:01:06] John: How about we do that? How about we create a podcast, do it for a year, and then talk about what that year was like? [00:01:11] Jason: that sounds great! Happy anniversary, John! [00:01:13] John: Happy anniversary, Jason. [00:01:15] Jason: I should have brought you something. I didn't. I'm sorry. How about we go out to lunch and we and we celebrate? [00:01:20] John: yeah, and maybe we can get a demo of the Apple Vision. [00:01:23] Jason: Oh, that'd be cool. Yeah. There's a little place right there where we can grab some lunch and maybe go over to the Apple store. See what's going on. [00:01:30] John: Yeah, [00:01:31] Jason: That would be thematic. A lot of this podcast has been a number of things. One, talking about online learning, but also talking about the new tech and how it might affect online learning in the last year. [00:01:41] John: Yeah. We are EdTech nerds also. [00:01:43] Jason: We are, we tend to nerd out on a few of these things. Today on my way over here, because I had to drive to this podcast today. I didn't do this podcast in my pajamas. [00:01:54] John: Horrors. And you drove yourself. 
You had to operate a machine to get here. [00:01:59] Jason: But it gave me, afforded me a little bit of time in the car to listen to a podcast. I listened to our first episode. It was kind of nostalgic, [00:02:06] John: you weren't tuning in to our first episode just out of some kind of vanity thing Oh, I love listening to me. [00:02:12] Jason: No, it was not because I like the sound of my own voice. Although after doing a podcast for a year, you get used to it. [00:02:18] John: you don't even know what you sound like. You're just like, [00:02:20] Jason: I listened in because I was curious about what we talked about in our first podcast. Whether or not, what we talked about then rang true in our first year of podcasting and maybe looking ahead to see what's going to be different. And what I found was, we basically talked about. What we were going to talk about, which was online learning, the second half, check. We've been talking about this last year. How technology affects online learning, check. We've definitely had a lot of that. We also had thought our big theme was going to be humanizing online learning. Check. We've had a bunch of that. And also, however, one thing we had slightly wrong. What our topic of the month was, which was AI. [00:03:03] John: Yes. [00:03:04] Jason: It's become the topic of the year, probably. [00:03:07] John: The topic of the Year .5 Yeah. So [00:03:12] Jason: that's the one thing that we probably got wrong. The other thing that I would say that we didn't know about, as we couldn't quite see into the future with this, but one of the big things that you and I have talked about is how much we've enjoyed having guests. We started this as a conversation between you and me. But how great it's been to bring other voices in this year. [00:03:34] John: It has been remarkable to have other voices in. It's been amazing having guests because I feel as though it's a privilege that we get to have this kind of professional development that we create, I guess is how I look at it. And I think we do something for our guests too. They feel good about being able to talk about their work, but the breadth and depth of the things we've talked about with some amazingly smart people has been just a privilege from me. [00:04:01] Jason: Yeah, a privilege. That is a great way to put it. And just to be able to talk with some of these experts the last year to get a completely different for some of them anyways take on the things that we've been talking about has been challenging, informing, guiding for me so that we're not just talking in a vacuum here. Really, our first guest was when we did the podcast Super Friends episode a little less than a year ago at OLC and we did another one just a few episodes ago to wrap up the year and then we had some amazing guests Dr. Michelle Miller Dr. Enilda Romero Hall. Then we were able to talk to Dr Kristen DeCerbo from Khan Academy. And that continues to be a big thing out there. We made a great connection to OLC keynote speaker, Dr. Brandeis Marshall. Michelle Ament, Dr. Alicia Magruder at Johns Hopkins, which actually then led into a podcast recording at their symposium, which was so much fun as [00:05:01] John: That was so fun and so innovative to be able to have a, almost a simulcast of the podcast as the concluding session of an online teaching symposium. It has been good in that regard. And also, a chance to connect these ideas over time with of other things that come across our desk as it were. 
So, I think about Michelle Miller, and we keep talking about same side pedagogy. that keeps coming up as a relevant thing. Brandeis Marshall's notions of what's un AIable. I continue to talk about that even this morning with a provost from a two-year college in Texas was talking about this. [00:05:41] Jason: You know what's cool? I was talking to somebody at UT the other day who has been listening to our podcast and he quoted Brandeis Marshall from our podcast about [00:05:51] John: that That's fabulous. Yeah. And then. You know what I think surprised me the most over time is how certain things are emerging now, I think that are more important than anything else that's happened with AI in the last 12, 13 months, which is still the topic of ethics. And it's not about the technology. It's not about the advancements. We're coming up in March of 2024. So, it's one year into the old March madness when GPT 4 came out and then I guess Anthropic came out, BARD, all of them were releasing and it was an arms race in March of 2023 to see what these models would look like. And now. We haven't seen in the last 12 months a massive boost in the model capabilities and a bigger discussion, I think that's happened over ethical use and the creation of guidelines, particularly in the education space. [00:06:46] Jason: Yeah. When we recorded, we didn't even know of the existence of chat GPT four at that point when we recorded our first episode a year. [00:06:54] John: ago. No, we did not. [00:06:56] Jason: And so that just started that whole year of recognizing first that AI is a thing. And then all of a sudden people realize, oh, wait, it's actually pretty significant thing. When that next model came out and realized that the real capabilities of AI were Much deeper, much better than what we expected, even on the front. end. [00:07:18] John: but the two guys that run the hard fork. podcast, were talking about how Sidney at the time, now all these name changes, but Sidney was Bing chat, which was Microsoft thing. It, it had told, it was a Kevin Roos or was it Kevin Roos was advised by Sidney to break up with his wife and start dating Sidney. You, similarly, dad your heart broken by Bing. [00:07:43] Jason: Right. I'm being chat and had some very strange conversations with Sidney right in that same time. So, it was just the wild west in some ways that some of the initial concerns of AI kind of were tamed, I would say about those chatbots. [00:08:00] John: Yeah. Yeah, they were. But we were so amazed by the Model 3. 5 we couldn't stop talking about it. We thought we'd be done in a month. [00:08:08] Jason: I would agree After that initial surge I think what we seen is a lot of third-party companies starting to leverage this power and I would say as we predicted A lot of edtech companies that were starting to add to it. And so, we talked about that. We predicted that back last year in March. And then as we were walking the floor and if you look back at our episode number nine, how are ed tech vendors humanizing online education? When we were walking the floor of O.L.C. Nashville at that point, that was March of last year. It was very different even as we were walking the floor in the fall. Conference in terms of who, at least I found, who was already talking about. [00:08:59] John: at that [00:09:00] Jason: point. At that point, they were talking about it, not really implementing it, and we had some interesting kind of responses. And then by the fall they were really advertising ai, [00:09:09] John: AI. 
In fact, the vendors, I think, that were concerned about having AI be part of their models were the ones that were trying to catch kids cheating. Using AI, not thinking about how AI might be embedded into their tool to advance some feature that they wanted. [00:09:24] Jason: Yeah, it was much more that concern. Yeah, it's interesting. Yeah, I think in the other part that I feel like what we're seeing more of lately in the arms race, and this is why some of our ethical conversations have taken this turn is the capabilities of AI beyond just the chatbot language model into the areas of media when it comes to video. One of the things we've seen in the last few weeks is OpenAI's Sora, S O R A, even just like yesterday, I saw that Alibaba, you know, which I don't even know what it, I've never bought anything from it, but it looks like a place that you can buy, cheap stuff on a wholesale kind of level. They have a model that they're working on for lip syncing that's quite impressive. We can put a link to that model in the chat, but I feel like what we're seeing are these kind of video lipsyncing kind of ideas as well as if you think about what has happened in the last year in terms of image creation, how much better it's gotten. And then even audio. I was doing a a few of these audio demos that are out there right now, one that's actually built into CoPilot that you can ask for it to make a song for you. it's, oh it's cool. It's pretty wild. And maybe we'll make a little clip in here. Okay. Let's. I'll I'll quickly make something and then we'll take a listen to it and and maybe close out the show or something with it at the end. But yeah, you could just put in a prompt saying for it to make a theme song in this style using these lyrics if you want to, and then you can actually edit it edited it afterwards. [00:11:06] John: Are you noticing the same thing I'm noticing too about the sort of seamless integration of generative AI into almost, I don't want to sound hyperbolic, but almost every app now that has been popular, has now decided to seamlessly integrate AI into itself, making its presence in operations that are not very transparent to the user. Or Notion, Copilot you name it, well Canva, they're all putting AI. Operations in Zoom. And I'm wondering if this sort of invisible AI is going to lull users into thinking that this is just part of the app, and it may not actually be AI. I think about Zoom and it's a meeting summary feature. We were talking about this at our university in our policy group because I think a lot of people think, if zoom has this feature, then it must be okay to use. And then It's part of our acceptable use. Maybe it's inside our privacy guidelines, so I'm going to turn it on and we're going to use it, but that's not necessarily the case. And so, if you're recording meetings or you're putting in student data or you're having, I don't know it's interesting to think about because I think it can enhance user experience, but I think you can also lull people into thinking that this is safe AI. [00:12:19] Jason: Yeah, I guess using their brand acceptance, so we work at institutions that there's quite a vetting process to get something inside of our doors. So, we know that we're obviously working with Google, microsoft, and and zoom would probably for us and canvas all four of those. We're both of our institutions. That's, those are the four biggies. [00:12:40] John: Yes. [00:12:41] Jason: And so, you're saying that it is almost like. 
It feels like if something comes in alongside of those packages or with those packages then it becomes all of a sudden to just accept it. It doesn't have to go through a, yeah, it doesn't have to go through a new vetting process. If all of a sudden, a new, say there's a new video product and this is how we would get AI video summaries. This would have to go through a whole new vetting process, but we're not doing that. It's just just happening. [00:13:08] John: Yeah. And so, if the underlying models are suspect at times, even, if we look at Gemini, Google's Gemini, and as we record this on March 1st, 2024, in the past seven to nine days, they had a major generative AI failure on their imaging model. If those are the underlying engines, if you will, that are, adopted and licensed by these brands accepted tools. Yeah, how safe are things going to be? How do they, can Zoom stop OpenAI if they're using that engine? Can they can't really put new guardrails on top of what it does with the data because the model's the model, I'm not technical enough to know the answer to that, if I'm making sense. [00:13:50] Jason: Yeah, you're making sense. I don't even know if Zoom is using OpenAI, and it, because it just appears, and I think we get a lot of wrappers around things as well, that are really OpenAI. And then we get this new wrap around it and other things that are more like companies that are doing their own thing. So, it's hard to, yeah, it's really hard to track down. [00:14:10] John: and, the question for me becomes even more important to discuss when we think about all the wrappers that have been created for P 12 teachers like Magic School, Diffit, a couple of others come to mind but I don't remember the names, but they're all also running on top of these models that are only as safe as they're made by those developers. so yeah, I think it's, I think it's something to talk about [00:14:34] Jason: You know what's funny? This this audio creation program. So, we got SORA by OpenAI, which is this brand new video. And then we got Suno, S U N O, with this audio that's coming in with copilot anyways. [00:14:51] John: I didn't know it was inside Copilot. So what app are you using in Microsoft to get, to invoke Suno? [00:14:57] Jason: of Copilot. So what app are you using in Microsoft to get, to invoke Suno?. What kind of style should we do today? [00:15:20] John: We're sitting together in a room in Lexington, Kentucky. Can we do some bluegrass? [00:15:25] Jason: Yeah. In a bluegrass style. Any other parameters we want to put on it? Maybe what do we want to have in the, what's really important to us? What do we our year in reflection song here, what do we want in the chorus to really hit home for the listener? [00:15:42] John: That let's see. Human centeredness is the key. and ethics is important and learner outcomes are paramount. [00:15:57] Jason: Okay. Say in the course, make sure to include something about human centered online learning. And then I, I got caught. [00:16:07] John: in your, [00:16:07] Jason: your superlatives. What was the, what were the, what was the second one? [00:16:10] John: Ethical use of AI. [00:16:14] Jason: It should be, maybe we [00:16:16] John: And belonging. Oh, I rented a Okay. Let's see. You shorten your prompt to fit belonging in there? [00:16:24] Jason: Yeah, I'll try to. Nice. [00:16:25] John: Nice. [00:16:27] Jason: Yep, okay, it's creating it. It's going to give me two versions and we can take a listen to both of these. [00:16:32] John: Okay, excellent. 
[00:16:33] Jason: We can talk about other things. [00:16:34] John: Yeah, while it's cooking, yeah. [00:16:35] Jason: to it. here's what's amazing. The first version is already ready. I thought it was going to take longer. Now the second version is ready. [00:16:45] John: Oh, okay. [00:16:45] Jason: I'm not sure how we're gonna be able to listen to this just because of the current setup here. [00:16:53] John: Let's see what happens. [00:16:54] Jason: But we can put it. Song “Keep on Learnin’ plays in a bluegrass style: [Verse] Gather 'round, folks, and lend an ear There's a podcast here that we hold dear (oh-yeah) It's all about learnin', in an online way Discoverin' new knowledge every single day (ooh) [Chorus] Human centered, always yearnin' For that ethically tech and belonging learnin' (learnin') Tune in and listen, don't you ever stray Online Learning Podcast, we're here to stay (heyy) (Join us now, keep on learnin') (Oh-yeah, yeah-yeah) Keep on learnin' (Oh-yeah, yeah-yeah) Keep on learnin' (Oh-yeah, yeah-yeah) Keep on learnin' [00:17:00] John: ha ha ha… [00:17:08] John: oh, a little Cher. What? You're the audio guy. What is that? [00:17:12] Jason: it's like a little new, yeah, it's a, like a new bluegrass. it's [00:17:17] John: it's a little country though. I think it's not quite. [00:17:20] Jason: quite bluegrass, Yeah, it's not quite. [00:17:21] John: but. [00:17:22] Jason: Okay, that was a, so that was the first one. It's called keep on learning with a little apostrophe. Keep on learnin'. [00:17:28] John: There's two people singing apparently in this, and there's someone who goes, "oh yeah." [00:17:33] Jason: the things that impress me are a year ago, since this is a podcast and review a little bit, a year ago. Not even close, the things that were out there that you could create music and it sounded like a mishmash, like something that you would hear on like a Star Wars film that they're trying to make it sound different and spacey and non-human. [00:17:56] John: Or it was the third or fourth duplication on your Maxell tape. Yes. Yeah. And it just degradated and degradated. [00:18:06] Jason: So, first thing that impressed me is just where we've come in a year, the quality the second, the kind of the clever turnarounds on the lyrics. And then the third, adding pop elements that are very catchy for the listener, these kinds of echoes, as you said, and so on. [00:18:26] John: Yeah, for The TikTok nation. [00:18:28] Jason: The TikTok Nation. [00:18:29] John: Yeah, [00:18:30] Jason: Yeah, which is basically all of our listeners, right? TikTok nation. [00:18:32] John: Basically, yes, that's right. [00:18:34] Jason: Listen up, TikTok Nation. Is that how we should start our podcast? [00:18:37] John: Maybe our podcast should be 60 seconds long if we want to, if we want to capture them. [00:18:43] Jason: Okay, here's the second one. That was Keep on Learnin'. This is this is called Learning in Harmony. Uplifting folksy bluegrass. [Verse] Well, gather 'round folks, I've got a story to tell 'Bout a podcast that's got a lot to propel Online Learning Podcast, it's the name Where knowledge and wisdom come together like a flame [Chorus] In the world of bytes and screens, we find our way Human-centered online learning, come what may From the hills to the valleys, we all belong Ethical tech use, we'll sing this song [00:18:50] Jason: not sure about the chord progressions in that one. [00:18:53] John: More than I would about that. I would. 
And this is I put these out here with full understanding that part of my brain and heart is, " wow, this is so cool that technology can do this." [00:19:05] Jason: Another part of me who, I've written a few songs in my life, and I enjoy playing guitar and there was probably even a moment that if the winds of success had taken me in direction, I would have done full time music. And it's both scary and a little offensive when I think on that side of it. [00:19:22] John: Yeah. So, let's go to the offensive part because I think we're both having conversations with colleagues and I'm also seeing reports online of research on where instructional design is going with AI and how these tools SORA and others are putting. Making graphic designers drone operators who do B roll feel a little at risk. And I think, I bet there's some offensive feelings there too about their art. [00:19:48] Jason: Yeah. Actually, it's not completely true that I make 0 a year from my music. John, I've I'm raking in some Spotify money. I didn't know if you knew this or not. Yeah. It's I think I get like point zero. zero three cents per play and yeah, I think my last cash out was maybe around 2 or something. Yeah. So, I really am a professional musician, but I say that to say that This is not something I'm trying to make a livelihood from. It also is not something that feeds my own sense of self worth at this point in my life. [00:20:28] John: Yeah, but how would you feel if you were trying to make your livelihood from this? [00:20:31] Jason: I think particularly I; I think it would depend on the person and what I was trying to do. But I would say almost every musician would feel. A little scathed by this because even if their livelihood is mostly playing live concerts, which this is not going. [00:20:49] John: No. [00:20:50] Jason: And developing a fan base, which this is not going to do part of your livelihood is getting yourself noticed in this enormous sea of other talent that's out there. And then also, I know people that are, they're singer songwriters is how they make their living. But it's great to get those what they call sync royalties when you get a song placed in a movie or a TV show. [00:21:14] John: I was just thinking about that because I'm wondering what Hollywood will do with this capability. I think that Hollywood feels like they want to protect the rights and the livelihoods of artists writ large. So, they probably wouldn't do what I'm suggesting, but television production could decide to use Suno to do the theme songs for new TV shows. I'm thinking about one of my favorite bands is Gangstagrass. They're a band. [00:21:37] Jason: Oh yeah. I love them. [00:21:38] John: Yeah, they blend, if folks don't know, they blend bluegrass and hip hop and they're amazing. They're amazing. I've seen them three times. They're coming to town here in Lexington soon. We're going to go see them. But my point here is that they became more famous because their music was used as the opening theme song for the television show Justified. And if I wanted to do that again, if I were in production, could I just skip all that and just have a theme song written right off the bat from AI. [00:22:06] Jason: Yeah, if you're looking for a particular kind of sound and that kind of mix, you wanted something a little gritty but Southern, but also urban, then that would do it. And then, essentially, while I was talking, Suno was able to recreate our learning theme song in a bluegrass hip hop style, right? 
So, you think about how quickly this can happen at the capabilities that we have today. And this is, here's song number one. Verse] Well, gather 'round folks, let me tell you a tale 'Bout a podcast that'll make you wanna prevail (oh yeah) With a blend of hip hop and old-time string We're gonna dive deep, learnin' ev'rything (ooh-yeah) [Chorus] Human centered online learnin', take a seat on the track Ethical tech use, we ain't gonna lack Belongin' is the rhythm, that's our podcast groove Put your hands in the air, let the beat make you move [00:22:34] Jason: And this is, song number two. [Verse] Well, gather 'round now, y'all, let me tell you a tale 'Bout a podcast that's bridgin' the gap without fail It's online learnin', it's the way of the world With a touch of bluegrass and some beats that'll twirl From the hills of Kentucky, to the streets of the city This podcast brings the vibes, all witty and gritty Talkin' 'bout human-centered online learning, y'all And ethical tech use, that's what we're all 'bout [Chorus] Come on now, let's sing it loud and clear Human-centered learnin' and ethical tech use right here Belonging is the key, come join the crowd Discover new knowledge, sing it out proud (yeehaw) [00:22:36] John: oh my. [00:22:38] Jason: The second one particularly, I'm a fan of Gangstagrass. That second one particularly [00:22:42] John: hit. it, it approached it. [00:22:44] Jason: Old school. Yes. Hip hpehop. [00:22:45] John: but that first one, I don't wanna offend anybody. I don't know what that was. Was that some kind of Toby Keith kind of thing? I'm. I'm out on that. That's but and that's funny how musical tastes run to also I'm not a big, like traditional country fan, like CMA style country, but I'll go to every Gangstagrass concert I can get my hands on. But you're right. The second one approached it, but still, and then I started thinking about cultural appropriation and what is this? Yeah. This is AI's attempt at understanding culture, which is, that's risky. Yeah, [00:23:16] Jason: Yeah, we got yes, tricky waters right there. [00:23:19] John: Incredibly tricky. [00:23:21] Jason: so, we've Talked about just ethically doing this in light of the musicians themselves, but I'm watching I'm a big jazz fan as well. I like a lot of different kinds of music, but I'm a big jazz fan. So, I'm watching the Ken Burns series on jazz, which I highly recommend. It's slow. It's long, but it's beautiful. But how many times have we taken an art form as a dominant white race from another people group and then appropriated it because we figured out how we could monetize different way. Or in this kind of case, how can we non monetize it? So, we're maybe they're not even making money off of this song. So maybe these aren't going to show up on iTunes. Cause I know iTunes has made some rules about this. YouTube has now made some rules about this, but maybe they'll show up in the next ad for whatever, and they've made it for free. So basically, the Suno terms of agreements is that if you pay for it, you have full mechanical rights to these songs. [00:24:25] John: So, if I make a Suno song, were you logged into your University of Tennessee controlled garden of this? So, if I make a Suno song inside my University of Kentucky controlled garden of Co Pilot, does the University of Kentucky own the, that song? [00:24:40] Jason: That's getting into the whole intellectual property end of things. 
That's a whole other thing. They have the mechanical rights to this really crappy, culturally appropriated piece of junk that I created. [00:24:51] John: And you're right. But look at how much of advertising now... I've cut the cord on my television, and whenever I accidentally happen to go back to watching network TV or watch my local news, I'm shocked, and also simultaneously not shocked, that the insipid advertising that I grew up with in the 70s really hasn't changed much. So, your comment about Madison Avenue using tools like this to create jingles and other things to cut out artists for their clients: absolutely, I bet they'll do it. I'm very cynical about this. I think that's where this is going. [00:25:26] Jason: And you talked about networks, and maybe some of the big ones will, for the sake of their already large group of customers, perhaps they'll make some rules about this to please people. But the networks are not just competing with other networks. They're competing with Mr. Beast. [00:25:43] John: Yes, they are. Yes. [00:25:45] Jason: Like, Mr. Beast is enormous. He has an enormous viewership, and my guess is that his income per year probably rivals, if not some of the smaller networks, then some of the smaller production houses for sure. And I only know about Mr. Beast because I have teenage kids who drive these whole things, one of my kids particularly. And also Dude Perfect. They're not utilizing traditional streams, and so they're not going to be beholden to these kinds of larger ethical restrictions. [00:26:18] John: Now, Mr. Beast, for folks who don't know, how would you describe him? He's an internet creator. I'm logging on to Variety.com: his annual earnings hit $82 million last year, more than double any other digital creator. And it's also funny, his name, Mr. Beast, sounds, for those who aren't in the know, like some kind of awful weird guy, but he's just this young guy, right? [00:26:44] Jason: Yep, he seems to be. Like, who knows. I've listened to some other podcasts that talk about him, and actually even the Hard Fork that we mentioned, I think they talk about him one time, his kind of use of YouTube. Who knows what all his motivations are. Regardless, he does give away a lot of things, and he seems to be fairly kind to people in that... [00:27:00] John: In that way. His real name is Jimmy Donaldson, for the... [00:27:03] Jason: Oh yeah, yeah, of course I know that. I follow him on LinkedIn. [00:27:06] John: Oh, you're going to be a gigantic creator on LinkedIn now with the Beast. [00:27:11] Jason: Our connection is pending, is pending. So yeah, remarkable. My kids watched Rhett and Link throughout. Do your kids watch Rhett and Link? [00:27:20] John: Okay, they're at 35 million, second place, but they're 50 million away from Mr. Beast. [00:27:24] Jason: Yeah, that's wild. I think that points to the fact that ethics is a huge topic right now, and one of our last podcasts was about this. We can't rely on the companies coming up with the ethics to guide us. [00:27:38] John: No. [00:27:39] Jason: Partly because it won't be comprehensive enough. It's one thing if Apple comes up with some ethics, or Microsoft, but not everybody's gonna abide by these rules, and there's gonna be so many startups that would... [00:27:54] John: Just, mm-hmm. [00:27:55] Jason: ...do an end run around any of these kinds of companies to get a few more views. [00:28:00] John: Yeah.
I think, as we talked about in that episode on ethics, we've got two sets of ethical books going: one by the companies to be sure that they can sell as well as possible. So, I'm calling those the less ethical set of books. And there's a public persona of wanting to be safe, and so they put in enough guardrails, through their red teaming and things like that, so we can't get instructions to do awful things, but then they stop right there. After that you're on your own. [00:28:28] Jason: Yeah, and depending on what AI you use, you can always find one that can do what you want it to do. [00:28:33] John: That's right. Or you download your own LLM, you get a Llama and run it on your own, and then there are no guardrails, no red teaming. [00:28:41] Jason: It's crazy. I had a little bit of space this week to go follow some rabbit trails, and one of them was looking at Hugging Face, trying to understand a little bit about what it's all about. And it's a place where you can actually download models. So, you talked about this one model. But have you been on there? Should I ask the question? [00:28:59] John: Should I quiz you on this? No, do not quiz me on this. [00:29:02] Jason: For those listening, I won't quiz John on this, because it's hard to always be in the know about something like Hugging Face. I had no idea that this was going on. [00:29:14] John: I just want to say that I'm comfortable being in the dark around you. Because you're kind to me. [00:29:19] Jason: Oh good, that's great. And I put this out here to say I'm oblivious, and I don't really understand all the implications of this. However, right now on Hugging Face, which is more of an open-source AI model arbitrator, almost, there are currently, and I'll take a pause here, podcast listeners, guess, to yourself or to somebody you're listening with, maybe say it out loud: how many LLM models do you think there are right now to download on Hugging Face? [00:29:49] John: Okay. And while people are thinking about that, I will too. So, what you're saying is that Hugging Face sounds like it's kind of a marketplace for large language models, or you make your own sort of, I'm air quoting, "GPTs," and then you can go get one and download it and run it yourself. [00:30:06] Jason: Yes. I would call it more of a GitHub... [00:30:09] John: ...than a... [00:30:09] Jason: ...than a marketplace. I didn't say anything is for sale. No. And it feels like GitHub when you get there, where you can do different forks of different... [00:30:17] John: ...LLMs. And on this LLM landscape inside Hugging Face, do they have special purposes, some of these? In that way they're like the GPTs that you could make for... [00:30:28] Jason: Exactly. Okay. So, all of these would have different purposes. These aren't like the big models we're talking about; many of them are leveraging those big models. [00:30:36] John: Okay, cool. [00:30:36] Jason: These are like GPTs, many of them, that you can download and use. Most of them you can use on your own computer. Your own home computer. Okay. [00:30:45] John: All right. So how many are there out there? [00:30:47] Jason: Right now, as of today, March 1st, 2024, and this will change, there are currently 531,270 that one could download. [00:31:02] John: Little large language models, little AIs. Yep.
That I can then pull onto my hard drive and never have to get on the internet, and ask it anything I darn well please. [00:31:13] Jason: Exactly. Yeah. We were talking about our one-year retrospective, and some of our predictions about what we were going to talk about last year were true. We didn't know we were going to be talking about AI for this long, or that it would move this quickly; that was one of the differences from last year of doing this podcast. Here's what I think with all these creative elements: it's going to start with some professors thinking, "I don't need a production company to help me do these things," and they're going to create, maybe just for fun at the beginning, a theme song for the class, or a video of them teaching the class in Mandarin, or the class being taught by some historical character with their voice to it, or some generated images in their slides, which is already happening, right? And at first it's going to be a little gimmicky, and then we're going to cross a threshold where it, A, is no longer gimmicky, and B, actually starts to affect workflow and the people that we use for doing this work, particularly at large institutions. What do you think of that kind of prediction? [00:32:27] John: I don't know. We'll have to see. Based upon some of the surveying I'm doing before I go talk with groups about whether or not they've ever even used a large language model, used ChatGPT, 50% routinely state that they either have never used it in their lives or have used it once or twice ever in the time since it came out. [00:32:51] Jason: So that's over a year. [00:32:52] John: And so if half of our educators out there are in that space, then I don't think that they're going to be using these models in any deliberate way to advance their teaching and learning goals. They'll be using them however the platforms, like we talked about before, start to integrate these tools, and that's how they'll get used. [00:33:14] Jason: Yeah. I think you're right. The average professor, I agree, is not going to be going into Hugging Face and downloading and creating. [00:33:22] John: I was just going to say the same thing. I'm crazy enough to
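For anyone curious what the "download a model and run it on your own computer" workflow discussed above actually looks like, here is a minimal sketch, assuming the Hugging Face transformers library is installed and using the small gpt2 model purely as an illustrative stand-in for the hundreds of thousands of models on the hub:

```python
# A minimal sketch of downloading a Hugging Face model and running it locally.
# The model name is an illustrative choice; any hosted model with a
# text-generation head would work similarly.
from transformers import pipeline

# The first call downloads the weights into a local cache; after that,
# generation runs entirely on your own machine, offline if you like.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Online learning in the second half should focus on",
    max_new_tokens=40,        # keep the completion short
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

Once the weights are cached, nothing further goes over the network, which is exactly the no-guardrails, on-your-own-hard-drive scenario John describes.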
EP 25 - AI Guidance from Oregon State University Ecampus with Karen Watté
Mar 20 2024
EP 25 - AI Guidance from Oregon State University Ecampus with Karen Watté
In this episode, John and Jason talk to Karen Watté, the Senior Director of Course Development and Training at Oregon State University's Ecampus, about their free tools for AI guidance in higher education and how to humanize online education. See complete notes and transcripts at www.onlinelearningpodcast.com   Join Our LinkedIn Group - Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)   Links and Resources: Oregon State University eCampus AI Tools: https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/ Michelle Miller's Newsletter, Teaching from the Same Side: https://michellemillerphd.substack.com/p/r3-117-september-15-2023-reflection OSU eCampus Readiness Playbook: https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/readiness-playbook/ Transcript We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!   [00:00:01] Jason Johnston: I picture everyone in Oregon in log cabins and so on. Is that correct? [00:00:04] Karen Watté: No, not at all. [00:00:06] Jason Johnston: What? [00:00:07] Karen Watté: I always tell our candidates who are coming, I say, we have the best of both worlds. You're an hour from some beautiful ski areas, you're an hour from the coast, and boy, if you wanna see the desert, you just head on a little bit further and we've got the high desert. So, we've got something for everyone here. I've lived other places too, and I come back and I say, oh, this has got it all. [00:00:31] Jason Johnston: I grew up in Canada, and sometimes we would talk to people about the igloos that we lived in and having to check our dog sleds at the border and those kinds of things. Sometimes they believed us, sometimes they didn't. [00:00:44] Karen Watté: Yeah. [00:00:45] John Nash: I'm John Nash here with Jason Johnston. [00:00:48] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast. [00:00:53] John Nash: We're doing this podcast to let you in on a conversation that we've been having for the last couple of years about online education. Look, online learning's had its chance to be great and some of it is, but there's still a lot that really isn't. So, Jason, how are we going to get to the next stage? [00:01:08] Jason Johnston: That is a great question. How about we do a podcast and talk about it? [00:01:13] John Nash: I love that idea. What do you want to talk about today? [00:01:16] Jason Johnston: I am really excited to be talking today with Karen Watté. She's the Senior Director of Course Development and Training at Ecampus, Oregon State University. Welcome, Karen. How are you? [00:01:28] Karen Watté: I'm good. Thank you. [00:01:29] Jason Johnston: We connected at OLC, the Online Learning Consortium conference, as part of their leadership day that they do ahead of time, and it was very fortuitous, I think, because we had just come through this summer where everybody was scrambling around AI, trying to figure out what to do. And while we were trying to come up with some ideas and so on, all of a sudden Oregon State had a full-fledged website built out with resources and stuff like that, and we're like, this is amazing, over here at the University of Tennessee. It was really well done. So, we got chatting about that at OLC, and then we got chatting about being on the podcast. So, thanks for joining us.
'Cause I'm really excited about talking with you today. [00:02:10] Karen Watté: Yeah. Thanks for inviting me. Glad to be here. [00:02:12] Jason Johnston: Tell us a little bit about what you do at Oregon State and your role there. [00:02:17] Karen Watté: Yeah, as you mentioned, I'm the Senior Director of Course Development and Training with Ecampus, and at Oregon State, Ecampus is a centralized distance education unit, so we're serving all of the colleges within OSU. We have about 13,000 fully online students that we serve, and about one third of all the students enrolled at Oregon State are fully distance. [00:02:42] John Nash: Wow, a third of them. Do you know what the history is of deciding to do a centralized distance learning unit? I know some campuses do that, some campuses don't, and I'm curious a little bit about that. [00:02:54] Karen Watté: We've been in online learning for quite a long time, 20-plus years. Oregon State is the land grant institution in Oregon, and maybe 25-plus years ago we were doing television-based learning and sending it out to everyone in the state, and that unit, of course, was extremely small. As online learning developed, it changed and morphed into what it is today. So it's always been that central support unit, and the way that the funding was established at OSU to support that unit encouraged it to remain a centralized space. [00:03:33] John Nash: I see. [00:03:34] Karen Watté: It's been really a nice advantage, I think, for OSU to have that centralized. [00:03:38] John Nash: Yeah, I get the sense that there are advantages to it. My institution isn't so centralized. It still has a unit that supports that, but it's not connected to tight instructional design support, and I'm sure there are disadvantages to that. You said something that was interesting, which is, I think we're the land grant institution here at the University of Kentucky, but it's something about funding from 50 years ago that seems to set these things in motion. And so it sounds like, yeah, that was a centralized sort of ITV unit and things like that, and then it moved into this. It's interesting. More decentralized here. [00:04:13] Jason Johnston: Yeah, and we are also the land grant here in Tennessee, so I think that we've got a common thread here. And I think, as we've talked about, becoming really a modern land grant, some of it is strategically thinking about how we are going to continue to serve everyone in Tennessee, right? In the olden days, it was setting up outposts in every county. We've got 95 counties, I think, in Tennessee, and they were setting up outposts there. These days, we're talking a lot more about online learning and about trying to connect. There are almost a million Tennesseans who started their undergrad degree and didn't finish it, and how do we serve those students in 2024 to help them move forward? So that's good. I knew there was something else that probably connected us on a deeper level, and it's that land grant, I think. And you direct course development and training. So, does that mean both, like, from a production standpoint, developing the courses, and then also professional development for teachers? [00:05:16] Karen Watté: Yes. Yeah. So, my particular team, we have about 45 professionals. We're about half instructional designers, and then the other half is a media development unit. And we have a handful of folks that also focus just on faculty development.
But our media unit does videography and animation, and we have quite a number of programmers. So, we do a lot of work. We're basically the faculty-facing side of Ecampus. [00:05:43] Jason Johnston: And so how many are dedicated, then, within your 40-some-odd, to professional development? [00:05:49] Karen Watté: In terms of just doing faculty development and training, I would say we have about three individuals that really focus on that, but all of our instructional design staff, as part of their duties, also provide training and support. That could be one-on-one, but it could also be assisting with specialized trainings that we're putting together for faculty as well. [00:06:13] Jason Johnston: So, did you get to this role through, like, a faculty pathway, or instructional design, or media, or how'd you get here? [00:06:21] Karen Watté: I have a unique background. Years ago, in the early 2000s, after I got my MBA, I was working in private industry as an operations manager for FedEx Logistics, which was embedded into Hewlett Packard; if you are aware, we have a huge Hewlett Packard facility here in Corvallis, Oregon. And then, prior to coming to OSU, for about seven years or so I was actually faculty at a local community college in their business technology and computer systems department. Then I went to OSU about 15 years ago, and I started in faculty development and training with Ecampus, really establishing the foundational trainings that we base a lot of our course developments on today. And then I just moved up as Ecampus has grown, because Ecampus has grown quite dramatically, I would say in the last 10 years especially. [00:07:17] John Nash: What infrastructure was in place for you to come into your role at OSU and start to do that training? Or did you bring your experience from your past positions in and start to develop that? [00:07:28] Karen Watté: Well, I brought in a lot of my previous experience, and when I started, I was the fourth person to be hired into this unit. And so then we hired on an instructional designer who actually is my supervisor right now, Shannon Riggs, and she and I together crafted the foundational trainings that go into what we provide for faculty today. And of course, there have been many improvements since; we've brought on very skilled people, and they've added to this suite of trainings, but we started it about 15 years ago when we came in. She had come from a Quality Matters institution. I, of course, had a background in training, both in private industry and then at the community college as well. And together we put this program in place. [00:08:20] John Nash: Yeah. And then together you've grown it. What did you say? 40 folks? [00:08:25] Karen Watté: On our team I have about 45 folks; Ecampus as a whole is slightly over 100 staff. [00:08:35] Jason Johnston: And what's the online population these days at Oregon State? I know you talked about it in terms of the percentages of Oregon State, but how many online? [00:08:46] Karen Watté: So, we have a little over 13,000 fully online students. And like I had mentioned, one in every three OSU students now is a fully distance student. But in terms of how many students we touch every year, I think our last report showed that we had 29,000 unique students who took an Ecampus course, because a lot of our campus-based students will also take an Ecampus course here or there during the year.
They find it very helpful, and it allows them to have a flexible schedule. [00:09:20] John Nash: Yeah. Cool. [00:09:21] Jason Johnston: Going back to our earlier note about these AI resources, and we'll put the link for people that are listening into the chat. I just thought there are a number of things on here, and just so people can visualize it even without seeing it: you've got some ethics and principle kind of statements, but then you get into an AI decision tree, a guide to how to incorporate AI, or whether you should incorporate it into your work, as well as a reimagining of Bloom's taxonomy, which is really, like, instructional design love language. Bloom's taxonomy, we've got a few of them, and that's one, it's up there. So, just to give people a little bit of a landscape of that. But I wanted to talk about, as we're all dealing with AI at our respective institutions, and John and I are both involved with various conversations around that, how did this come about? Where was the impetus for this? Is this something from within Ecampus, or was it a kind of, the provost said, you must do this, or we'd love for you to do this? Or were the faculty rising up and saying, give us AI guidance? How did this all happen? [00:10:33] Karen Watté: Yeah, that's a great question. I think back in winter of '23, we realized at that point that we were dealing with a situation that was like none other we had ever seen before: here's this digital tool, just exploding in capability, faster than anything that we had seen before. And like many institutions, I think we had sessions, talking sessions with faculty, where we introduced them to this idea. We wanted to have discussions with them. And certainly there was a lot of curiosity out there, but there was also a lot of fear. And I know that in the early spring we actually had at least one program leader who said, we're waiting for Ecampus to figure this out. And so there was some real pressure there. But I knew at that point, after having a number of conversations, that we were going to have individual faculty coming to us very soon with a lot of questions about, what does this mean? What are the implications of these tools? Should I put them in my class? How can I avoid my students using them? And so at that point I basically said, we've got two things we have to do, and we have to do them very quickly. Number one, we have to figure out what the Ecampus stance on these tools is, because clearly we were not getting a lot of guidance from any other location. The university did have a small task force, and I was on that task force, and we were looking at what was happening, but there wasn't real action happening in terms of how we were going to support our faculty going into the next year. And so, number one, we had to figure that out. And then number two, we needed to get some resources in place, because we were going to be providing training and support all through the summer and into the fall for faculty who were trying to grapple with this. And so that's really where that came from. And at that point, I said, okay, we've got a lot of really great thinkers here on this team. A lot of people have done a lot of innovative stuff. I know we have a lot of folks who were very interested in it on the Ecampus team.
And so, I handpicked 12 people based on their diverse backgrounds and what they were interested in. And I said, you are our AI council, and these are the three things we're going to do. We're going to figure out what Ecampus thinks about these tools, and we're going to take a stand on it. Secondly, we're going to figure out some kind of taxonomy that will allow us to identify what AI skills are needed; I had, through some other conversations, been inspired to think about it in that way. And then finally, third, I needed some practical strategies. We needed a library of strategies that our instructional designers could pull upon as they had questions from faculty. So that's really where it came from, organically, as we were having conversations and knowing that there was this sense of urgency, that we needed to get our house in order so we could help faculty who were going to be coming at us all through the summer. [00:13:45] John Nash: The tool page, and I'm looking at it now, stands out for a number of reasons from my perspective, and that is, you start with an ethics statement, but then it follows with some principles, and principle number one of seven is: be student-centered. Now, when Jason and I, and maybe you, are hanging around having coffee, this seems obvious to us, I'm sure, that this ought to be number one, but it's not, actually, for most people. Maybe I'm stretching; it's not for many people. At our institution, and as I work also with P-12 schools around how leaders are going to articulate guidelines for AI, they aren't always first thinking about being student-centered. It's more administrative, or it's a lockdown attitude, or it's an integrity issue. Can you talk with us a little bit about the conversations you may have had and why being student-centered is number one on the principles? [00:14:38] Karen Watté: When we were trying to decide what we needed to do first, that was to establish this ethical foundation: what are we going to say we stand for, and what's important to us? And forever, Ecampus has always been student-centered. So, when we talked about what's important to us when we're evaluating these tools and whether we should use them, we went back to OSU values, but also our Ecampus values, which articulate that the student comes first. We do things for the student. So that seemed like just a natural piece to bring over as one of the principles that we're going to abide by when we're looking at these tools. The other principle I think is very important there on the list is that last one, which was accountability, because I think that kind of wraps up the fact that, regardless of whether you're using AI, the human author is ultimately responsible. So, there are all these other issues that we want to consider, but we also want to ensure accountability for everything that's being produced here. [00:15:41] Jason Johnston: And just to read that one, number seven says: establish accountability. Regardless of how or whether AI is used, emphasize that the human author is accountable for all content produced. [00:15:54] John: Yeah, that's key. I've been involved with the generation of a document that's going to help our faculty have productive and developmental conversations about their distribution of effort.
How are you going to actually work on your teaching, research, and service? And we relied a little bit on AI to help us brainstorm through some of these conversations, to turn a very transactional document into something that's more of a developmental conversation. And yeah, we placed a statement in the end notes about how it was used, but then also that we stand by the facts in the document as authors and contributors. [00:16:28] Karen Watté: Yeah, so important. [00:16:29] Jason Johnston: Yeah. And your number two talks about demonstrating transparency. Again, along with that, if it's being used and integrated, recommending that faculty are clear in the syllabus that such tools will be used. That's another place we've been talking a lot with our faculty about, which is transparency, both on the faculty side but also on the student side, creating a space in which things are transparent. And I think one of the outcomes of that is that you create a more trusting environment. Along with that, I noticed you don't say anything about AI detectors here on your list. There's no number eight, thou shalt use AI detectors, or thou shalt not use AI detectors. Do you have anything that you are willing to put on the record about AI detectors? [00:17:15] Karen Watté: We haven't been impressed so far. I'll just say that. I think there is a lot of information out there pointing to the fact that they don't do the type of job that they should be doing, or that they claim to do. And often the bias that seems to come out in their results is very disturbing. So, at OSU we have stayed away from that. That is not the direction we want to go at this time. [00:17:44] Jason Johnston: Yeah, we've talked about this. Again, if you listen to this podcast, this will be the fourth time you've heard this, maybe fifth, but Michelle Miller talks about same-side pedagogy and about, within the classroom, what are we building together? With an AI detector, are we building a community of trust and co-learning together, or are we building a community of distrust and separation between the student and the teacher? It's a rhetorical question, the way I phrased it, but I think we know the answer to that, which is, AI detectors do not help with same-side pedagogy, putting us on the same side as the students, right? [00:18:27] Karen Watté: And I think, really, I would emphasize just the inaccuracy of these. I was just reading some information from some R1 institutions that have done a little bit of testing in-house, and these AI detectors just don't measure up to what they claim they can do. So it's just best to avoid them for now. It's not something you want to get into. [00:18:51] John Nash: For me, it's almost as though your first principle of being student-centered suggests that the AI detectors aren't necessary. That if you're being student-centered, doing as Dr. Miller at Northern Arizona says, having a same-side pedagogy, not an adversarial one for learning, then you're going to be okay. [00:19:11] Karen Watté: Yes. Yeah. [00:19:12] Jason Johnston: So, I had mentioned this before: we looked at your decision tree here at UT as we were trying to work as a team to figure out when and when not to use AI in our own work, and then when we recommend it as we were talking with faculty, because I'm in kind of the same sort of position that you are, in terms of working with course production but also doing professional development with faculty.
[00:19:38] Jason Johnston: It seems like a lot of work to have gotten to this place in terms of the decision tree. Did it come easily as you were going through things? Did you base it on some other previous kind of work that you had been doing around even just the implementation of technology, because I think there's some overlap here, or how did this specifically come about? [00:20:01] Karen Watté: I think all of the hard work and conversation around what our values and principles would be really led naturally into the creation of that decision tree, because you can see each branch correlates very closely with many of the principles that we identified. So, in that respect, that piece of it was easy, but of course it was vetted numerous times among the small work group that created it, and then with the larger council. And we added that very first question toward the end of creating it, which is: we must check with the department and the program first. That is always the first step. Does the department or the program have a policy in place? At the time that we were creating this, very few had any policies in place. They were still in conversation, but I think that will be changing over time. So, we'll check there. And then the second one, of course, we're very student-centered, so the second question is: how would this impact your pedagogy? How does this lead to better outcomes? What is the impact on students? And if you can articulate that well, and it makes sense, then you continue on down through that tree. But those first two questions are critical. If you can't get past those, then you should stop at that point, essentially. [00:21:18] Jason Johnston: Yeah. [00:21:19] John Nash: The decision tree, for those who are not looking at it right now, is a guide that was developed by your unit to help decide when and how to incorporate AI into your work. It's aimed at the teacher or the instructor. Is that fair? Or could it also be for an administrator, an associate dean, or someone who's thinking about using it for non-instructional purposes? [00:21:44] Karen Watté: Yeah, that's a good question. I think it certainly could be repurposed. When we were creating it, of course, it was meant as a guide for our staff and for faculty who are working on course development, but certainly many of those questions are very applicable. If you're looking at AI to improve a business process at the university, you may want to review some of those kinds of questions. So, I think it certainly could be applicable to other questions, other spaces. [00:22:13] John Nash: Have you been approached as a unit by folks who are looking to, as you advise here... when an answer is no, your recommendation is to pause and seek consultation, and then with an asterisk you note that would be consulting a supervisor or other person who can provide expertise. When I think about, for instance, my department, we don't have a policy in the unit. I would consult my chair; they would shrug their shoulders. I might look inside my college; they would likewise shrug their shoulders, and I think this might actually escalate up to maybe our center for learning and teaching or something like that. Are you seeing similar things, and how is this playing all the way down to the unit in terms of people's capacity to look at these questions? [00:22:57] Karen Watté: Yeah, we've used it in a few different contexts.
So, for example, a faculty member came to us and wanted to create some AI-supported materials for their course development. The first question was back to the department, and the department at that moment said, absolutely not, you're not going to do that. So that was the end of that. But then we've had another situation where we had a faculty member who came and said, we would like some graphics created to support this particular concept, and by the way, it's okay if we look at AI image generators to help support this piece. And so then we had a conversation within our team, and specifically with our videographer who is helping to pull some of these images together, around, okay, what are the concerns? Let's look at the limbs of this tree that are most applicable here, which of course would be copyright. How are we certain that there's not a copyright issue if we use this particular engine to develop a few images to support this particular learning object? And so, we were able to clear those hurdles, but this decision tree gave us that sort of framework for the conversation and to ask those kinds of questions. And so, I think those are a couple of examples of where it was useful. [00:24:14] John Nash: Those are great examples, because I think that a tree like this really is less a dictatorial policy and more a driver to engender conversation around what people want to accomplish. Yeah. [00:24:28] Jason Johnston: You'd mentioned your media team, and you've got a pretty large team. Have you found a variety of opinions in terms of the use of AI within your own team? You don't have to name names on the podcast. Or have people tended to get behind the same horse on this one? [00:24:49] Karen Watté: Generally, I think we have a pretty innovative group of people, so they've been quite open to it and all of that. Although I will say that we have a couple of instructional designers who are particularly concerned about privacy issues when it comes to using these, and copyright and all of that, which, rightfully so, you know, and so we've had conversations around that component. They're not quite as excited to start experimenting and putting things up into these systems, which totally makes sense. But otherwise, I would say we're probably a lot more willing to get out there and try things, just because of the nature of what we do every day. [00:25:32] Jason Johnston: That's impressive, that you've been at this for a little while here at Oregon State and that you continue to be innovative. Only because it feels, and please correct me if I'm wrong on this one, but it feels like our institutions of higher learning, our land grant, established, longstanding institutions, don't tend to go that way all the time. They tend to maybe favor the more traditional. So how do you think you've kept this going, if you've been early adopters when it comes to online, and you continue to innovate forward? [00:26:06] Karen Watté: I think it's just the culture of the unit. Essentially, it started out as this little skunkworks area. We were trying things that no one else would try, and so the university continues to turn to us to do those kinds of experiments when it comes to teaching and learning, and then we're hiring people that have that same mindset. And we're telling them it's okay to take a risk. It's okay to try something.
And if you fail, that's all right, because we're learning. I think it's just the culture, and maintaining that momentum about being innovative, but innovating in a careful way. We are, of course, research-based. Much of what we do, we experiment with when we find that there's a research basis for it. It's not just the Wild West. So in that regard, we value research just as much as the rest of the faculty at the university, but we do try to push and experiment with new things when we think there's a valid reason to do so. [00:27:05] Jason Johnston: So, it's been about maybe seven months at this recording since you put these out, which is like 20 years in AI years, I think, right? Is there a calculation for that yet, John? [00:27:15] John Nash: We could take dog years times cat years and divide by Moore's law. I think we'll get somewhere in the ballpark of that. [00:27:26] Jason Johnston: Yeah, exactly. We'll work on that, and, like everything else, we'll put the formula in the show notes. [00:27:33] John Nash: Yes. [00:27:33] Jason Johnston: John? [00:27:33] John Nash: I was going to ask Bard, but I can't anymore, because Bard is now called Gemini. [00:27:38] Jason Johnston: Yes. We'll ask... I've got the advanced one. Anyways, that's a whole other conversation. So, we'll talk later. Anyways, back to the question. Since the seven months have gone by, first, is there anything that you would change about what you put out there before? [00:27:54] Karen Watté: I think we had made it very clear that what we put out there was really a snapshot in time, that this is what we see today, particularly around that Bloom's taxonomy one. This is AI capabilities as they are in the summer of '23. So, we pretty much knew that we're going to have to revisit this in a year or sooner, and we will be reconvening our AI council in the spring to start thinking about what may need to change. Certainly that tool will have to be looked at again. The decision tree, I think, still probably stands as it is. I don't anticipate there will be a lot of change. But again, this is a conversation we're planning to have here very soon. [00:28:38] Jason Johnston: Are there other ways that you think you might expand? Like, what are some of the other gaps that you're seeing that you would like to help with at your university? [00:28:46] Karen Watté: Yeah. This fall we had some conversations around helping program leads, department chairs, anyone in a kind of leadership position, facilitate conversations around AI. And one of my colleagues, Dr. Katherine McAlvich, worked up a short guide; she calls it a readiness playbook for department chairs. That's actually posted out there on our website as well. It's about a five-page document, just to give some starting prompts, to encourage them to start speaking with faculty if they haven't already started that conversation. Because I think we're getting to a point, very soon, where we're going to see some need for curriculum updates based around this. I'm starting to see case studies about industries and how they're integrating it into work. And so that means that what we teach at the university, or at any institution, is going to soon have to reflect what the reality is out in the workforce. So, I think those conversations, trying to encourage that and get folks to talk about that, is probably the next step. [00:29:51] Jason Johnston: We look forward to more updates.
Yeah, we'll be watching that. Thank you for being open-handed. We're having conversations here about what goes on the web and what doesn't, and we strongly advocate for sharing resources on the web for others to be able to see, because they're helpful, and we've been helped by yours, so thank you for that. [00:30:12] Karen Watté: You're welcome. And this is a topic that no one institution can answer, can manage alone. It is such a huge undertaking. We look to all of our colleagues too for help, guidance, and ideas around this topic, because it's certainly a collaborative effort. It has to be. It's just something that's so unusual at this time. [00:30:35] John Nash: Can we pivot away from AI a little bit and talk about learners? [00:30:39] Jason Johnston: I guess so, John. [00:30:41] John Nash: It turns out we didn't mean to, but about half our talk is about AI. Then the other half is actually about learners, I think... But yeah. You did an interview in 2017 for the Oregon State Ecampus News, and you were asked what your best piece of advice for instructors was, and you said, "Be sure to let your personality come through in your online course. Communicate regularly with your students and provide them with timely feedback. Your interaction with your students is the most important part of the student's online experience." And it feels like that advice never gets old, but feels fresh to some. Can you just say a little bit more about why this wisdom is so important? [00:31:23] Karen Watté: Yeah. We survey our Ecampus students every year, and it's interesting to note that even to this day, they continue to say that the number one indicator of their satisfaction in an online course is the interaction that they have with their instructor. So, I would say that our data continues to bear that out year after year. Instructor presence is just absolutely critical in an online class. And now you even see this reflected in the Department of Ed's requirements around regular and substantive interaction, which a lot of folks have spent time thinking about as well. [00:32:02] John Nash: That first part of your response, which was be sure to let your personality come through: what is some advice that you have for teachers who are thinking about upping their game in that area? [00:32:14] Karen Watté: We of course love to try to get them on video if we can, at least an intro video in every course. We love to have them do video overviews, if they're willing, for each activity. But even if they're not able or willing to do that, they can just infuse their actual personality and their passion for the subject into the announcements that they make, into the content that they're delivering to the students. So, we really try to work on helping each faculty member bring out their best and put their personality into a course. [00:32:48] John Nash: Fantastic. We see that more and more. I know in a recent episode we had the privilege to record a session with Johns Hopkins University's online learning symposium, and their speaker for that symposium was Flower Darby, and she was very clear about letting your personality come through in your course. And so it feels like yellow Volkswagen theory: once you buy a yellow Volkswagen, then all you see on the road are yellow Volkswagens. And once you start talking about letting your personality come through in your course, you start picking up on it every time someone says something about it. But yeah, that's really good advice.
[00:33:24] Jason Johnston: And that feels like a good Oregon thing, too, right? Yellow Volkswagens. You have a lot of yellow Volkswagens out there. Is that another stereotype that I have about Oregon? [00:33:32] Karen Watté: We've got some on the road. [00:33:33] Jason Johnston: Got a few. Yeah. And along with that too, one of our themes here is talking about how we humanize online learning, right? As John always eloquently introduces us, you know, we've done a lot of things great, and some of it, not so much. And I think one of the places that we want to grow in this next season of online life, now that we can get content to people, we figured that one out a long time ago, and now we're learning to maybe make it a little bit more interesting and interactive, is how do we humanize? I really like that point about making sure that personality comes through in your online course as part of that. Are there other ways, as a group, or in your professional development, or in your course production process, that you help faculty to really humanize their online courses? [00:34:25] Karen Watté: Yeah, that's a great question. I think a lot of that kind of comes down to just ensuring that you're explicitly designing in opportunities for engagement, because unlike an on-campus course, where you have that natural opportunity, online it has to be designed in. And so, as you're designing that in, you're thinking about: is that channel easily accessible to students? Is it easy for the faculty to use? Is it easy to manage while you're teaching that course, the kind of communication that would allow you to connect easily with your students? What does the feedback look like in the class? What's the pacing, and do you have enough time to provide the kind of feedback that you'd like to provide, so students feel like they're really having a good learning experience and connecting with you? So ultimately, I think a lot of this just has to be built into the course through the course development process, in the conversations that the instructional designer is having with the faculty as they're talking about what this course is going to look like when it's actually being taught. [00:35:34] Jason Johnston: Mm-hmm. Yeah, in this symposium that John mentioned, there was a bit of a common thread: one of them was talking about intentionality. And that's one thing I really like about course design, instructional design, and the process, that we just don't expect faculty to arrive in their online course and for everything to be there and just work. Side note: we shouldn't expect this in their face-to-face classes either, but there's not always a lot of concentration on that. However, we're talking about online here. But I think that there's an intentionality about design that I love, and I think that if we can take a step back and think about what it is we're intentionally trying to do here, we can really move the needle. [00:36:17] Karen Watté: Absolutely. Yeah, it's really thinking ahead. And the lovely thing is that we ask that online courses be entirely developed prior to the actual launch of the class, so we're not developing them on the fly as the course is underway. And I think that really lends itself to some thoughtful kinds of activities,
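The decision-tree flow Karen walks through earlier in this episode (check the department or program policy first, then the pedagogical impact, then remaining branches such as copyright, and pause for consultation whenever an answer is no) could be approximated as a simple checklist in code. This is only a rough, hypothetical sketch of that flow, not OSU's actual tree; the field names and questions below are illustrative:

```python
# A rough, hypothetical sketch of the kind of decision flow described in this
# episode. The structure and questions are illustrative, not OSU's actual tree.
from dataclasses import dataclass, field

@dataclass
class AIUseProposal:
    has_department_policy_approval: bool
    pedagogical_benefit: str                     # how the use improves outcomes
    unresolved_concerns: list = field(default_factory=list)  # e.g. ["copyright"]

def review(proposal: AIUseProposal) -> str:
    # Question 1: does the department or program allow this use?
    if not proposal.has_department_policy_approval:
        return "Stop: check with the department or program first."
    # Question 2: can the pedagogical impact be articulated?
    if not proposal.pedagogical_benefit.strip():
        return "Pause: articulate how this leads to better outcomes for students."
    # Remaining branches: copyright, privacy, and similar concerns.
    if proposal.unresolved_concerns:
        return "Pause and seek consultation about: " + ", ".join(proposal.unresolved_concerns)
    return "Proceed, and document how AI was used."

# Example: the image-generation request Karen mentions, before the copyright
# question was cleared.
print(review(AIUseProposal(True, "custom graphics to illustrate a key concept", ["copyright"])))
```

As Karen notes, the value of the real tree is less automation than giving instructional designers and faculty a shared framework for the conversation.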
EP 24 - I Cancelled My Midjourney Account - The Great Big Fat AI Ethics Episode
Feb 19 2024
EP 24 - I Cancelled My Midjourney Account - The Great Big Fat AI Ethics Episode
In this episode, John and Jason talk about the ethics of AI, including how ethics are formed, and a few scenarios, like whether it's ethical to use Midjourney. Listen in to find out who says no! See complete notes and transcripts at www.onlinelearningpodcast.com Join Our LinkedIn Group - Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too) Links and Resources: Article: Harvard Business Review Ethics in the Age of AI Series: Part 1, Part 2, and Part 3; Article: It's Not Like a Calculator, so What Is the Relationship between Learners and Generative Artificial Intelligence?; Jason's FAFSA Assistant GPT; "Right Choices: Ethics of AI in Education" - John hosts Jason in an episode of the School Leadership + Generative AI series; John's School Leader AI Bootcamp Transcript We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions! Podcast Episode on AI Ethics - January 29, 2024 False Start [00:00:00] John Nash: Should we do the intro? [00:00:01] Jason Johnston: Yeah, let's do the intro. [00:00:03] John Nash: I'm John Nash here with Jason Johnston. [00:00:06] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning Podcast. The Online Learning Podcast. Let's try it again. [00:00:12] John Nash: I'm John Nash here with Jason Johnston. [00:00:14] Jason Johnston: That reminded me of, do you ever watch The Office? "My name is Kevin, because that's my name. My name is Kevin, because that's my name." So this is the Online Learning Podcast, the Online Learning Podcast. Episode [00:00:30] John Nash: I'm John Nash here with Jason Johnston. [00:00:32] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the Online Learning Podcast. [00:00:38] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last couple of years about online education. Look, online learning's had its chance to be great, and some of it is, but still a lot of it isn't. How are we going to get to the next stage, Jason? [00:00:52] Jason Johnston: That is a great question. Why don't we do a podcast and talk about it? [00:00:56] John Nash: That's perfect. What do you want to talk about today? [00:00:59] Jason Johnston: John, I've got some ethical questions for you. [00:01:02] John Nash: You do? [00:01:03] Jason Johnston: I've been wondering about the ethics of using AI for certain tasks. And maybe we'll get back to some specifics later on. But how do we form our ethics to begin with, when it comes to AI and using AI these days, when we think about education? [00:01:19] John Nash: I'm stealing your line from the intro: that is a great question. How do we form our ethics? I think they're formed by the values and the beliefs we bring to anything we do. You've had a longer background in thinking about and considering ethics, both in your professional life and your education life. What do you think about in terms of what sensibilities people bring to any task? [00:01:45] Jason Johnston: Yeah, I think so. I like where you started there, because sometimes people start externally. They think ethics are clear, right? We're not supposed to steal people's cars, and we're not supposed to kill people when we walk in front of them, or whatever. But it's not that clear when it comes to certain things. Certainly we can follow the ethics of a country or a city or an institution, but AI is something new.
We haven't dealt with some of these questions before, and because of that, it does take some ethical reasoning. I happened to talk to a number of PhD students taking an instructional systems design course; I was asked to come in by one of our previous guests, Dr. Enilda Romero-Hall, to talk about ethics in instructional design. And where I started with that was this question of what we bring to the table. If we can understand what forms our ethics, our beliefs, our positionality to begin with, then we can start to understand why we might have some knee-jerk reactions to certain things. [00:02:49] Jason Johnston: And we might be more willing to concede on some things for the sake of the common good, as we talk about ethics within a context, or within a group of people, or a community, or what have you. [00:03:02] John Nash: Do you think the ethics of the companies that are creating these models drive how people feel ethically about using them, or is it the other way around? Did the companies decide they needed to sound ethical because they knew people were going to clamor about whether these models might be used in unethical ways? [00:03:26] Jason Johnston: Yeah, this is a great question. It feels to me like there are aspects, if I'm reading down, and they've all got them, right? You can look these up: OpenAI, IBM, Anthropic. If you start to read down those ethics, typically you resonate with a lot of them. They're good things, typically, about security and inclusivity and being non-biased and private and so on. But then you've got to ask yourself, what is really driving these companies to do what they do, and what is not being said, right? What's between the lines here, and what's missing? And this is where I think we need to go beyond what the companies are saying and think ethically about our own context. As educational institutions, I don't think we can just rely on these. Do you think we can rely on these ethics to help guide our use of AI? Are they good enough, John? [00:04:19] John Nash: Can we rely on them? [00:04:21] Jason Johnston: Yes. [00:04:22] John Nash: To what extent? I think, of course, they're a good start. They're a start. I think maybe even "good" gets left off of that last statement. They're a start. They're certainly not unethical, what's been put out there, I don't think. But the companies are no fools. They know that they're for-profit companies, and if they were to put out statements around ethics that didn't seem to meet with what generally morally accepted principles look like, they would be derided in the marketplace. [00:04:50] Jason Johnston: So do you think these ethical guidelines are crafted by philosophers within their midst, or marketing people within their midst? [00:04:58] John Nash: Certainly, I think it's more of the latter than the former. Many of them are Bay Area companies, and there's an ethos of the Bay Area and these guys and how they think. I think they probably want to be ethical. Google once, infamously now, said, "do no evil." And then of course later got into many different kinds of arrangements that were not unevil. [00:05:19] Jason Johnston: Yeah. You'd sent me an article a little while ago in the Harvard Business Review. They had an AI ethics series, and I can put the links into the show notes here, where they looked at avoiding the ethical nightmares of emerging technology and questions about AI responsibility. And one of the questions was, what does the tech industry value?
And it looked at some of the ideologies around the culture of speed. And so I think my question with some of these is, if you look at any of these big companies, Google, IBM, Anthropic with Claude, OpenAI, they have a list of ethics, but I think we always have to ask the question: what's not there that's driving them? And I think this is one of those things, this culture of speed, and the fact that it almost seems like their guiding point is that we need to do this as quickly as possible and get out there in front of other people. And that guides them ethically in terms of the choices that they make. [00:06:22] John Nash: I agree with you. I think that they have two books of ethics, maybe, almost like a business that's got a second set of books. And so they've got the public ethics around keeping people safe and data safe, and making sure the responses of their machines, which are very human-like, are safe. And then the other set of ethical books says, we need to move on this like our board members want, because shareholder value. [00:06:52] Jason Johnston: Yeah. Yeah. And because of that, they may be willing to let some of those guardrails down a little bit to allow for the speed. And some of these posthumanist or transhumanist kind of people that are running a lot of these companies, from an ethical standpoint, are taking more of a teleological approach, which is looking at the ends justifying the means. If, in my mind, this is going to improve society so radically, then we're willing to let a few things slide here along the way. And I think that's where the speed comes in: if we can get there quicker, and we can improve society sooner, then we're willing to let a few little ethical oversights go by while we're building whatever it is we're building. [00:07:42] John Nash: Yes, because if you take what Marc Andreessen recently said, there is a belief amongst some of these founders that they are actually saving the world, that these are technologies that are going to save humans. [00:07:56] Jason Johnston: I resonate with that idea of there being two books, and we've got to ask what the closed book, the secret book of ethics, is, and what the open book of ethics is. The open book of ethics almost always now talks about safety and inclusivity and privacy and these kinds of things, whereas the closed book is probably more about things like speed, about having a perception of what the public needs in order to adopt it versus it actually being there. So basically, managing your market and managing what the market perception is of a particular thing is more important in these cases than the actual thing itself. [00:08:44] John Nash: Yeah, or what problem the thing is solving. We've not been privy to the real internal discussions at, say, OpenAI when they said, we will publicly release 3.5. I don't know what the problem was that they saw being solved in the marketplace by releasing this. [00:09:01] Jason Johnston: Right? [00:09:01] John Nash: I don't know that there was one, exactly, except that it's just a fascinating technology and fun to play with and mind-blowing. But that's about it. And yet they were able to monetize that because people wanted to play with it and actually do work with it. Yeah, I think these were all products, solutions in search of a problem. [00:09:21] Jason Johnston: Yeah, it's strange. And this is what makes it really unlike a lot of other inventions.
And I think because it's so open-ended, it's so user-driven,
[00:09:30] John Nash: Yes.
[00:09:31] Jason Johnston: and inquiry-based, that it doesn't need to be a solution to any one problem. It's like an open-ended potential solution.
[00:09:41] John Nash: Yeah, unlike the sundial, or the scientific calculator, or the phonograph, or the chalkboard, or, go on. Yeah.
[00:10:05] Jason Johnston: On paper and in their heads, but you're right. We continue to press math forward. However, again, here's a piece of technology, just like you mentioned there, with a very specific use in mind, right? And it has certain limitations to it. I think AI is more like the internet, where it's wide open, or, as a recent article I read said, it's more like electricity. Somebody told me this about their grandfather who lived in Eastern Kentucky and didn't have electricity, and they asked him, do you want us to run the lines out to your home? And he's like, why would I need lines? I don't have anything to run on the electricity. Which is true, right? It's an absolutely true statement. But electricity was almost like a solution without a problem, because as soon as you got it, then you figured out ways to use it.
[00:10:51] John Nash: I've been wrestling in my head with whether or not this is like a utility. I don't think it's necessarily a public good, but people are paying for it like it's a utility: they pay a monthly fee, like they pay for their electricity; they pay for access to ChatGPT-4. But in doing so, is it just creating a situation where people get a bunch of stuff or do things that they didn't necessarily need?
[00:11:19] Jason Johnston: Yeah, I think my own use of it is probably a mixed bag. I sometimes come away and it feels like I've been on the internet and didn't get anywhere, and then sometimes you go on the internet and you get some places, right?
[00:11:30] John Nash: Right.
[00:11:30] Jason Johnston: And you find the answers that you need, or sometimes you get lost in a string of cat videos and you don't know how you got there. And I feel like, because it has such a lack of focus, there's a lot of experimenting still to be done with it that doesn't necessarily give you helpful results for your time investment.
[00:11:50] John Nash: What do you think about the ethics of all of the little GPTs that are getting built in the marketplace? Some of them are completely frivolous, some of them are a little malevolent, others could be useful. Do you think that the people who create a little GPT also need to have an ethical code?
[00:12:12] Jason Johnston: Yeah, that's a great question, and this could lead into some other discussions about more contextual ethics. I do think that one can rely a lot on whatever the bigger ethics are in the system that you find yourself in, or the community, or the organization, or the country. So they can rely a lot on those larger ethics, but typically those larger ethics are general enough that they cannot always be helpful in guiding what you should and shouldn't do in the specifics. Does that make sense?
[00:12:51] John Nash: I think so.
[00:12:53] Jason Johnston: So, maybe somebody running a little GPT might be generally guided by a care ethic, or an ethic of how this might respond about certain races or stereotypes or people or whatever.
I think it behooves the person who's making that to ensure that's true, to do enough testing, and to think about enough of the use cases where it might be used to get around those kinds of general ethics, to guide it and keep it on track. I really think a lot of people don't start with ethics when it comes to developing these things. It starts a lot with innovation, which is okay. I understand that; they're trying to, like you said, solve a problem. This is a good time to plug my own GPTs, so people can use them. And I don't know, is this some sort of pyramid scheme? If I get people to use my GPTs or make GPTs, do I make money off of their GPTs?
[00:13:47] John Nash: Yeah, no, I don't think so. But I think you should, if you'd like, pose that to OpenAI to see.
[00:13:54] Jason Johnston: Really, I'm trying to find certain solutions. So I made a GPT because I've got questions: my kids are coming of age, and I've got FAFSA questions. So I made a FAFSA GPT that is trained specifically on the information from the government so that it could answer questions from a reliable source. And I think it was helpful for me personally, so maybe it'd be helpful for other people. But honestly, I didn't really think of the ethics of that. It was just a utility.
[00:14:27] John Nash: You did think about the ethics tacitly, because you wouldn't punk your kids on the FAFSA GPT.
[00:14:35] Jason Johnston: That's true. And there would maybe be some specific ethics that we know. For instance, of the many qualities that GPT had, especially in the beginning, we still know that it can be very confidently wrong, right? A lot of the other things it's grown away from, but it still can be very confidently wrong about certain things, and it can hallucinate and so on. And so I told it specifically to only give truthful answers, and if it doesn't know, to say it doesn't know, those kinds of things. Whether or not that works, I don't know. Sometimes it does, I think, sometimes it doesn't. But by guiding it to only use these resources, hopefully it provides what I was hoping for: truthful answers to my questions, for myself and hopefully for other people, so people wouldn't get steered wrong. So I guess you're right, yeah.
[00:15:23] John Nash: Yeah. So what do you think our advice is for teachers as they think about how they might integrate ChatGPT, Claude, or other large language models into their work routines? Either as an instructional design assistant, which is how I use them a lot (I use them more that way than as a tool for my students to solve a problem or do their work), or some hybrid of both. If we're thinking about our notion of being human-centered in our work, and encouraging others to be that way, what do you think we should say?
[00:16:06] Jason Johnston: Yeah, that's a great question. I would say, on the front end, that whatever institution or community you're in, we should be at the place where people have some pretty clear ethical guidelines to guide them as a community, principles that were agreed upon by a number of stakeholders across the community or institution, whatever, that can be more general. Like, I was very thankful to be part of a committee that developed some of these principles at UT, which can be really guiding principles.
And so there are things like "we use AI intentionally, it's human-centered, it's inclusive, it's open and transparent, we engage with it critically," and so on. But what I found when I'm working with my media team and my instructional designers, as we're talking about use within our day-to-day work, is that these guidelines were good, overarching guidelines that we could all agree upon, but then it came down to really specific kinds of questions that we needed to talk about. For instance, do we use AI image generators, right? And if we do, which ones do we use? Do we use them open-handedly? Do we just use specific ones? Are we concerned about things like copyright? Are we concerned beyond copyright? What other questions do we have in our smaller community? Questions that didn't even come up around faculty, around creative works: not just whether copyright is taken care of, but is there work creep happening when a person who's not a graphic designer uses AI to create graphics where another human would typically have done that, right? And so it starts to create a much more specific kind of context for principles. And we were able to come up with, and we're still working on, some more guiding principles that can help inform our day-to-day work within our team.
[00:18:02] John Nash: Yeah, the graphic example is great, because if you've got graphic designers and illustrators on your team, they take a brief from a client, they have to interpret that contextually, and then they create an illustration, let's say. If they or someone else uses an image generation model like DALL-E or Midjourney, they put in a prompt and it puts out something technically beautiful and maybe aesthetic, but does it hit the mark in terms of the contextual interpretation that was desired by the brief? That's very different. And say it does hit the mark, and it's created by someone who's an 18-year-old intern, let's say, that you hire: you have a new power dynamic problem. Now we're back to my original problem, right?
[00:18:49] Jason Johnston: Yeah.
[00:18:49] John Nash: You are usurping traditional power dynamics about who's supposed to do what.
[00:18:55] Jason Johnston: And that's where it becomes so contextual, right? Because, as you said, there are a lot of ethical ways you can talk about this. There's the copyright part of things; you can just lay it aside and say we're not going to cross copyright laws, and so we're just not going to do it at this point, or whatever. But there are other ethical considerations beyond that: someone's livelihood, potentially; there could be some power dynamics; there could be some lack of care and respect for people who have done this job for a lifetime, and they're trained to do this, and they have the tools, and then all of a sudden some idiot with a Midjourney account
[00:19:28] John Nash: Yeah.
[00:19:30] Jason Johnston: thinks that they can make graphics better than they do, and it's just not kind. And so I think there are many ways to look at that. Now, there could be another situation where somebody has a one-person shop, and they're doing tech, they're doing instructional design, they're doing a little teaching and professional development, and they're expected to do graphics on top of this, and they don't have the budget. They've been told, you can't hire anybody else, you don't have the budget, whatever.
It may be, in those situations, that the ethical thing to do could be to go ahead and use those graphics.
[00:20:00] John Nash: You've hit the nail on the head. Context is everything. Because you're right: if you're a solopreneur who, say, makes logos for a living, then you are doing client development, you're doing billing and invoicing, and you're doing the creative work. I think you're probably using LLMs and image generation models all day long to help manage that process. But that's different from a general ethic of care for just understanding how to deal with humans in the context of an organization, and whether you usurp their work without talking to them.
[00:20:32] Jason Johnston: Yeah. Let's do one other thought experiment here. What if... actually, I'll do two thought experiments.
[00:20:38] John Nash: A 20-year-old junior at university uses an LLM to critically examine the assignments given to them by a professor and writes back, giving them a critique on how the assignments don't really help them achieve the learning goals intended for the course. Or a parent decides to write the lesson plans for a 10th-grade English composition teacher. This sort of power still sits there. And so could a teacher's aide do the design work for a course instead of the teacher? Or should they? I think those are leadership questions. Those are ethical questions. Those are organizational culture questions.
[00:21:18] Jason Johnston: Yeah, I liked how your sentence changed there, because a great indicator that we're doing some moral reasoning is when the question shifts from could to should, right? So could that parent do that? Yeah, they certainly can; everything's there. Should they? That is the ethical question, and I think that takes some reflection. It probably takes some conversation, perhaps, even, to be able to work in empathy with other people. And I'll be honest, and this is also completely contextual, I'm not saying anybody else should do this, especially present company, but I canceled my Midjourney subscription. Hands down, it's making the best AI images out there, without question, and it was worth it to me from that standpoint. But I canceled it because of some of these conversations I was having with creatives, and it didn't feel good anymore to have it.
[00:22:32] John Nash: Say more. In what context? Like, would you stop using DALL-E now? You could still make images with Midjourney without a subscription, right? And even if you can't, I'm just curious: would you never use it under any circumstances now? I guess that's what I'm trying to understand.
[00:22:51] Jason Johnston: In my current context, the things that tipped me over were some of the copyright issues, in terms of using artists' work without payment or their knowledge, which didn't feel good to artists in general. The fact that I was paying for it as well, so somebody's making some bank off of this, right? So it's not experimental; this is a business. And then really thinking about this idea of why, and should I be, right? Why am I doing this? Do I really need to be making images of this high quality? If it's important to somebody else that I'm doing this, is it that important to me that I'm doing it? So that was my reasoning around it.
I'm not saying I would never use it under any circumstance, but it was partly a little bit of a statement, to be able to say, oh yeah, I just decided not to. It was an interesting experiment for a few months. And we have an Adobe Firefly subscription. They have an ethic that includes paying artists and only using works that they have full license to. It's not as good, but I'm willing to do that for now if I need to use AI. And to be thinking about, if there is something that somebody who has the skills should be doing, what place do they have in all this? Should I be giving them the opportunity and the chance to do this?
[00:24:20] John Nash: Fantastic rationale. Yeah, you've convinced me I need to think about dropping mine.
[00:24:27] Jason Johnston: Again, I believe it's context. I think that people need to think about it for themselves. I'm not going to go around wagging my finger at people via LinkedIn about it, although I have considered at least putting my thoughts out there. So maybe this will spur me to put some of my thoughts out.
[00:24:41] John Nash: Well, you know, there's nothing worse than a reformed anybody.
[00:24:47] Jason Johnston: That's right. Nobody wants to talk to that person. Yeah. This has been good, John. I feel like we've covered a fair bit of ground. We partly started talking about this because we did a video, which we'll also put in the show notes, where you and I broke some general ethics down in about 15 minutes. You invited me to come talk to you, and this is part of a boot camp you're doing as well, tied in with that. Perfect.
[00:25:12] John Nash: Yeah, you and I had a chat in a series I've launched called School Leadership and Generative AI, all in about 15 minutes, where we cover pretty big topics that are top of mind for school leaders, but we get to them as quickly as we can so they can gain some ground on some of these bigger issues. I did one with Dr. Kurt Reese on data privacy with students, and then, yeah, one with you on ethics. It's connected to my school leadership AI boot camp that I've got on Maven that people can enroll in; we'll put a link to that in the show notes too. But yeah, this was a good conversation today, I think. It made me rethink some things and really think about context. I was going to say earlier, too, maybe we fit this into the other part of the conversation: there were some articles six months ago or so about a firm in China that was going to have its CEO be a generative AI bot, and it was going to run the company. I don't know where that's landed since, but it made me think, could or should an AI bot run a school district? Could it even run a school? Could we have an AI LLM provost at a university? How difficult are those decisions anyway? That'll rankle some folks just for me even asking, but I think it's interesting to think about, because this is the direction these are going. Already, with the terrible news of deepfakes coming out around Taylor Swift and others, and with the election coming up and malevolent actors using these tools in bad ways, I think we're on the cusp of seeing the same sort of thing happening for leadership in organizations. Maybe not malevolently, but it's going to be there. We're going to have avatars that look very real, that will get past the uncanny valley, driven by large language models that sound like they know what they're doing.
So I think another level of ethical discussion is coming around how badly we need all these personnel.
[00:27:15] Jason Johnston: Yeah, all of that is coming along. I'm convinced more than ever that we need to be thinking ethically about these things, and not just thinking about it for ourselves: talking about it in our communities, coming up with standards that we can support one another with, and bringing people, all kinds of people, into those circles so that we can think about not just ourselves and those ethics, but how it affects the people around us.
[00:27:39] John Nash: Yeah.
[00:27:40] Jason Johnston: Yeah, this is good. Thank you, John, for this great conversation. And all of you, if you want the show notes, we're at onlinelearningpodcast.com, and you can check out all of our episodes there as well as the show notes. Thanks for listening. As well, if you have a chance and you find us on Apple Podcasts, you can leave us a review and send us a note there. You can always find us on LinkedIn too and connect with us there. We've got a community there as well, and we've got the links in the show notes for all of those.
[00:28:10] John Nash: Is it ethical for me to say that we found out that the algorithms like it when people go on Apple Podcasts and rate us and leave a comment? Or is that just stating a fact? Am I just stating a fact without ethical considerations? It's okay to state...
[00:28:28] Jason Johnston: I think that if it's true, it's ethical. And the fact that we're being transparent about this: we would like you to leave comments, not just for our own egos, but also to help the algorithm so other people can find this podcast. So, yeah, as long as we're being transparent, I think that's ethical, right? All about the algo. Talk to...
[00:28:49] John Nash: Cool. Talk to you later.
[00:28:51] Jason Johnston: ...you soon. Bye.
[00:28:53] John Nash: Yeah, fun. I'll talk to you soon.
[00:28:55] Jason Johnston: Bye.