
The Data Daily

Q&A: The Promise and Pitfalls of Artificial Intelligence and Personalized Learning

Last updated: 11-08-2019



On the educational horizon is the idea of a sophisticated, artificially intelligent program that chooses just the right digital content for a student at just the right time, homes in on students' strengths and weaknesses, and supports or pushes them as needed.

Some companies already claim their ed-tech products do just that—or some version of it—to customize learning. But are AI and personalized learning really the "dynamic duo" that some educators are hoping for?

Andreas Oranje, the general manager of research in research and development at the Educational Testing Service, says the time is right to examine how these technologies are evolving and the implications for K-12 teaching and learning. Educators are right to be excited about the potential of these technologies, he says, but they should also be wary of the potential downsides of how artificial intelligence is applied and used.

When people talk about artificial intelligence in the education field, they're often referring to machine learning, Oranje said, in which a machine is trained to perform some teaching tasks, typically more quickly and on a larger scale than the human brain can. But artificial intelligence has the potential to do much more, because true AI technologies are in a state of continuous learning, developing better strategies and tactics as they analyze more data. As these systems take in more data, however, they can also become more biased, depending on the input.

Oranje spoke with Education Week about the perils and promise of AI in personalized learning.

There are three ways to think about personalized learning and how machine learning or AI can take on a different role. The most obvious is around guiding learning activities. You do a learning activity, there's some assessment or continuous evaluation, and some kind of algorithm says, "This is what you should do next." It could be around addressing weaknesses, it could be around fortifying strengths and expanding on those, or it could be completing some domain.

The second form is more on the cusp of adaptive and personalized learning. Algorithms guide learning through scaffolding and responsive assessment. If a student struggles with something, there may be hints that pop up, additional tools like dictionaries, calculators, things like that. There's immediate evaluation, sort of the stealth assessment of where the student is, what they're doing, and how the program can help.

The third one is more about feedback. Everyone goes through the same activity or learning experience, but there are different levels of feedback for different students based on where they are. For example, for someone who is really struggling to put sentences together and write paragraphs, you want feedback that focuses on the mechanics: grammar, usage, spelling, and style. For someone more advanced, you want to give feedback about discourse or argumentation style.
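The tiered-feedback idea Oranje describes can be sketched in a few lines of code. This is a minimal illustration, not any actual ETS or vendor system; the score scale, thresholds, and feedback categories are all hypothetical.

```python
# Minimal sketch of tiered feedback (the third form described above).
# The 0-1 score scale, thresholds, and categories are hypothetical.

def feedback_for(score: float) -> list[str]:
    """Return feedback topics matched to a student's writing score (0-1)."""
    if score < 0.4:
        # Struggling writers get mechanics-focused feedback.
        return ["grammar", "usage", "spelling", "style"]
    elif score < 0.7:
        # Intermediate writers get structure-level feedback.
        return ["paragraph structure", "transitions", "clarity"]
    else:
        # Advanced writers get feedback on higher-order concerns.
        return ["discourse", "argumentation style"]

print(feedback_for(0.3))  # mechanics-focused topics
print(feedback_for(0.9))  # higher-order topics
```

The point of the sketch is that every student completes the same activity; only the feedback branch differs based on an estimate of where the student is.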

It will mostly be about data streams that come to the teacher, with the teacher delegating some decisions to a system and keeping others. It's about more nudges and supports that the learner gets during learning ("look at this," "have you considered this?"), a lot of feedback adaptation. I can also see teachers being much more like coaches who spend more time on the difficult, complex problems with smaller groups and individual students. AI technology will afford them the time to do this because a lot of that learning is managed through a system.

As we create a more complex world and look at a broader set of competencies and domains, from social-emotional learning to cross-cultural competency to collaborative problem solving, I can see the teachers focusing more on those aspects. The domains that are better understood are then more relegated to learning systems. I can see a situation where we empower teachers with more data and more evidence about students' performance and how they feel and where they are and where they're coming from to give them a broader and deeper learning experience.

Some of that is about automation and some of it is machine learning; a lot of it gets lumped together as being able to elucidate and aggregate data at scale. Technology and some machine-learning algorithms may predict what students may struggle with; that kind of information can help teachers know where to go next. It's usually very specific and targeted applications that are a success. Applications that are very broad about predicting scores are usually not as successful as things that are very focused on specific learning objectives.

Where we see this stuff mostly being used so far is in the domain of mathematics, where you have these progressive, clean, discrete topics you work on. It's much more specific as opposed to reading or writing, where it's much harder to predict anything.

It's no surprise that people, including teachers, are fearful about AI taking over jobs, especially if you look at how it's sometimes portrayed in the media. And it is certainly true that certain tasks will be automated and that jobs consisting mostly or entirely of tasks that follow predictable patterns will be at risk. However, teaching is a very complex profession, and AI will not be able to automate that much of it. In fact, I predict that it will lead to an expansion of education, not contraction. Between some automation of standard tasks, increased access to education for more people, and deeper insights into learning through analytics, I think the demand for teachers will go up. And they will be asked more and more to do what they are uniquely qualified to do: instruct, coach, mentor, differentiate, individualize, and inspire.

It's very worrisome. I did a presentation at ISTE [International Society for Technology in Education] that focused on how we can arm teachers with pointed and important questions to [reassure] themselves that whatever is offered is reasonable and works for them. Even if good data have been collected, the data may have been collected for one population and one purpose that have nothing to do with what a specific teacher is using them for. Those kinds of things can create biases and can even hamper learning for some students, because they're being classified or put in a bucket where they don't belong or that doesn't apply to them. Those are some of the concerns that are most salient.

The thing with AI and machine learning is that these things are still very much human decisions. We're encoding human decisions into a system, so human prejudices and biases are simply encoded into the system. The only difference is that the systems can be applied much more at scale, which means our biases can get exacerbated. There's not always a good emergency brake to say, "Not so fast, have you really tested this out for this population, do you really know what it's doing?"

You want teachers to help guide these algorithms, review things, and be able to pull the emergency brake when they clearly know the student is ready for something and the algorithm says something completely different. They can intervene. But you also want these algorithms to help teachers see around their own biases: "hey, your students might be ready for this," and I didn't think that as a teacher, so I'm going to try it. It works both ways. That hand-in-hand combination of teacher and tools is a very powerful and good thing.

Published in Print: November 6, 2019, as Personalized Learning: Dynamic Duo or Big Problem?

