Archive for October, 2006

On Evaluation

October 31st, 2006

How do you know if a course is working? One measure might depend on the results of your student assessments. If you think your students are learning at least what you want them to learn, then the course is working. Another measure is whether or not the students actually stay with the class. While this is less an issue in pK-12 than higher ed — school kids don’t really have much say in whether or not they go to school — it’s still one measure of success once one starts teaching high school and college. A third has to do with the load on the teacher, that is, are you getting the kinds of outcomes you want based on the amount of work you have to do? You can probably get good results with extensive one-on-one tutoring, but that kind of overhead can exhaust a teacher.

So the obvious next question is, “Is it different at a distance?”

By now you know me well enough to predict my answer will be, “No.”

In our course, I’ve been watching the levels of angst and joy. You all realize now that the course was set up with a very high threshold in order to break you out of your comfort zones and normal patterns of thinking. It was a risk because some of you might have seen the work load, lacked the confidence, and bailed out. The offsetting potential was that anybody who stuck it out through the first week would have a solid set of successes to build on, and nothing succeeds like success. Along the way, I’ve been watching for the signs and symptoms that you are — as a group — “getting it.” The posts you’re making tell me how you’re processing and what it is you’re doing with the process. I think all of you have grown very much in your relationship to teaching, learning, education, and technology. Ultimately, that relationship is the governing factor in distance delivery.


Access vs. Participation

October 28th, 2006

Clarence Fisher has an excellent post that feeds nicely into our discussion on assessment:

Remote Access: Access vs. Participation
He argues that we have moved beyond access to the point where we need to concentrate on participation. He stresses that interaction is a word we often use to describe the technology, but the ability to actively and constructively participate in online communities and what James Gee has called “affinity spaces” defines a culture of participation — a vital skill set for our time. He questions the track record many schools have of closing what he calls the “participation gap” of this emerging culture.

I saw this the other day when it hit my ‘gator, but Will Richardson linked to it and reminded me.

Take a look at the list of skills that Clarence has summarized there.


Why?

October 26th, 2006

Last night’s chat brought up an interesting subject that I hadn’t really considered. In discussing the Final Projects, I asked a couple of people to write up a short “Why?” document to provide some background on what they were doing. That makes so much inherent sense to me — of course, I need to know if students are actually recognizing and solving the problems — that I never actually wrote that requirement down. It’s not something I can grade, per se. But as we’re considering assessment this week, this really does capture the essence of what I’m trying to do.

Yes, students need to know what to do, and how to do it, but if I can get them to explain why they think they should be doing it, then I think that I’ve got a good handle on whether or not my course was successful.

Perhaps this should be in next week’s material, but I wanted to get it down while I was thinking about it.


Cool Cat Teacher Blog

October 25th, 2006

Cool Cat Teacher has a great tool box post.

Cool tools in my cool classroom

I’ve been using this conference as an opportunity for me to introduce Web 2.0 in depth to my 10th grade class and basic RSS to 9th. Here are the tools I’ve used:

Yes, I know this is student assessment week, so how would you

  • use these tools to assess your students
  • assess the work that your students do with these tools

And how is that different from “Will it be on the test?”


Assessment at a Distance

October 24th, 2006

So much of what has been written about assessment at a distance is unfortunate. The emphasis seems largely to be on cheating — as in, how do I know my student didn’t pay somebody to take the exam? — and plagiarism — how do I keep them from just turning in somebody else’s work?

Here’s my problem. What’s cheating? Sure, paying Kurt Vonnegut to write a book report on Slaughterhouse-Five the way Rodney Dangerfield did in Back to School is probably beyond the edge. What about looking up the answer online? Or asking Bob? If Connectivism is a valid construct, then knowing whom to ask becomes an important skill. For years, education has given a wink and a nod to the notion that it’s less important to know a fact than to know where to find the fact when you need it. What does that do to cheating? How can you cheat? If your goal is to assess how much knowledge about a subject a learner might be able to bring to bear on a problem, then ask him/her to solve a problem. Few people can cheat on a “performance” task, whereas most people might not even realize they were cheating on a “knowledge” task.

Then there’s the issue of plagiarism. If you’re assigning Yet Another Book Report on Tom Sawyer and you’re worried a student will buy last year’s reports, maybe it’s time to rethink that lesson plan. I’m thinking about what you are doing as a class. Yes, it’s possible that somebody is just copying their posts wholesale from somebody else, but I’m going to go on record as saying, “I don’t think so.” Each of you has a unique voice in your space — a signature style and outlook that’s as distinctive as your face. Some of you sound a bit alike, but by and large, I think I’d spot a ringer in there if one of you were to grab a post and “make it your own.”

Moreover, I assess you every week. I make it easy for you to “do your own work.” Except for the final project, there aren’t any high-stakes assessments that would make it worth your while to try to subvert them. Rephrased, I make it easier for you to just do the work than to try to cheat.

Oh, and if you do cheat, the extra work means you probably learned more.

So what are your ideas about assessment at a distance?


On Assessment

October 23rd, 2006

One of the problems we face is how to assess whether our students learned what we intended them to learn. In a classroom setting, we give tests and quizzes. We give homework that has to be passed in. We assign projects and grade them. But we also look to see who’s keeping up with classroom discussion and who’s actually making cogent commentary. We tend to think that, for the most part, the students in our class did the work if we can see them doing it. We tend to evaluate homework with an eye toward, “Does this sound like Johnny?”

We tend to think of this as a valid assessment strategy. Is it?

One of the problems is that the creation of tests and quizzes tends to fall into one of two camps: either you use the publisher-provided tests associated with a book or curriculum, or you create your own. The problem is that the instrumentation in either case has not necessarily been tested for validity or reliability. When it comes to the state-wide assessment tests, this becomes an even greater problem. If the instrumentation is not valid, or not reliable, then you cannot draw conclusions about the outcomes you measure with it.

To illustrate this problem, suppose you need to find out how much each of your students weighs. You’re given an instrument by the school and told to use it. On test day, you dutifully line the students up, use the instrument according to directions, record the results and discover that all your students scored within a few tenths of 98.6 on the scale. The problem, of course, is that the school gave you a thermometer to measure weight. From our perspective we know a thermometer doesn’t measure weight so the use of this instrument, while reliably measuring something, isn’t measuring what we intend it to. Knowing what we know about weight, temperature, and the instrumentation needed to measure them, we can say pretty emphatically that using a thermometer to measure weight is silly.

But how do you know if your classroom tests are measuring what you intend? We don’t have the experiential knowledge or background information that would allow us to say, “This is silly!” How do we know when it comes to assessing our students’ learning?

Reliability is the other issue here. Any instrument needs to measure what we think it measures, and it needs to do it with some predictable consistency. Reliability is one of those slippery constructs because it’s difficult to really appreciate all the things that could go wrong with it. Take the idea of measuring weight. Not all scales are equally reliable. My bathroom scale varies by as much as 20 pounds based on where I stand on the platform. It is measuring what I think it is, but the results are not consistent. I can’t say with any degree of confidence what my real weight (mass, really) might be. You have the same problem with your classroom assessments. Assuming validity (which you are, but shouldn’t be), how do you know the instrument is working the same way for all students? Does the ESL student have any special problems? How about the SPED student? Does it work the same way with boys and girls? How about with morning or afternoon sessions? Do you know?
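The bathroom-scale problem above can be made concrete with a toy simulation. This is a minimal sketch, not real data: the true weight, the error range, and the number of readings are all made-up numbers chosen to mimic a scale whose reading depends on where you stand. The point is that a valid-but-unreliable instrument produces a spread of results, so no single reading can be trusted.

```python
import random
import statistics

# Hypothetical numbers for illustration only.
TRUE_WEIGHT = 180.0  # the "real" weight we wish we could observe
MAX_ERROR = 10.0     # the scale can be off by up to +/- 10 lbs per reading

def unreliable_scale(true_weight, max_error=MAX_ERROR):
    """One reading from a scale that is valid (it does measure weight)
    but unreliable (the reading varies from trial to trial)."""
    return true_weight + random.uniform(-max_error, max_error)

random.seed(42)  # fixed seed so the sketch is repeatable
readings = [unreliable_scale(TRUE_WEIGHT) for _ in range(20)]

mean = statistics.mean(readings)
spread = max(readings) - min(readings)
print(f"mean of 20 readings: {mean:.1f} lbs")
print(f"spread across readings: {spread:.1f} lbs")
```

Averaging many readings gets you closer to the truth, but any single reading — like any single test score — could be well off the mark. The same logic applies to a classroom test whose score depends on which student, or which morning, happens to step on the scale.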


Last Outpost

October 22nd, 2006

The last twenty-four hours have seen a radical shift in my activity levels as I’m ramping up to begin working seriously on using MUDs as educational environments. I’ve had two goals in mind. One, to give you an example of what I think something appropriate for a “final project” might look like. Two, to provide the threshold support for people moving into working in the text-based environments called MUDs.

To see the result, visit my site: Last Outpost


Education as Art

October 22nd, 2006

The discussion this week has centered on the theory and science of Education. As we make the transition into our final teaching unit before the final project, I’d like to submit to you the idea that what we mean by Education in its purest, noblest form is not Science but Art.


Education as Science

October 20th, 2006

Science – Wikipedia
Science in the broadest sense refers to any system of knowledge attained by verifiable means.[1] In a more restricted sense, science refers to a system of acquiring knowledge based on empiricism, experimentation, and methodological naturalism, as well as to the organized body of knowledge humans have gained by such research.

Education is probably classified as a science by most definitions. Educators generally believe that their practice is a system of knowledge attained by verifiable means. Many spend their careers engaged in the research that defines the body of knowledge which encompasses Education. My personal problem with this classification is that, too often, we try to apply the generalizability of science — the value of science to predict and be replicated — to Educational outcomes and I believe that gets us into trouble.

That’s not to say that there are not valid bits that work very well. The use of graphics for educational purposes, for example, is very much science. We can predict rather reliably that an explanation using a simplified diagram generally works better than a photograph when explaining complicated ideas like the way an internal combustion engine works or the water cycle. We know that multi-channel encoding — where a single message is coded in multiple ways and presented simultaneously — works very well. We know that there is some governing mechanism at work in our brains that limits processing to a finite (and small) number of chunks at a time. These are definitely science.

The problem comes when we expect similar levels of outcomes as we apply the same techniques across multiple students. This variability — the distribution of outcomes — is also part of the science of education but the part we overlook because we have a bias towards sciences with less variability. Dropping objects from a roof and measuring the time it takes to fall to the ground gives us an extremely reliable model to predict how long the drop will take for a variety of objects. Dropping Romantic Poetry into a class of high school juniors is substantially less predictable.
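The contrast in that paragraph can be shown with a few lines of code. This is a minimal sketch using the standard free-fall model (neglecting air resistance), with an assumed 10-meter roof height; the point is only that the physics model returns the same answer every single time, which is precisely the kind of reliability classroom interventions don’t have.

```python
import math

G = 9.8  # gravitational acceleration in m/s^2

def fall_time(height_m):
    """Time for an object dropped from rest to fall height_m meters,
    ignoring air resistance: t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

# An assumed 10 m roof: the model predicts the same time for every
# object and every trial, about 1.43 seconds.
t = fall_time(10.0)
print(f"predicted fall time from 10 m: {t:.2f} s")
```

There is no equivalent `engagement_time(romantic_poetry)` function for a class of high school juniors; whatever model we fit, the variance across students swamps the prediction.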

One more confounding issue for the science of Education comes in those areas of the field where we’re really talking about superstition. We believe that there is a fact when, in reality, there isn’t. The classic example is “Learning Styles” as a governing characteristic of the learner. Many educators still believe that (a) Learning Styles exist and (b) a student can be reached most effectively if the teacher caters to that style. Let me just state for the record here that I’m using a strict scientific definition. It’s impossible to prove that something does not exist. All one can hope for in a science is to assert that no evidence has been discovered. In this case, despite numerous studies and investigations across a variety of disciplines and populations, no credible support for the theory of learning styles has ever been established. Again, it’s not impossible that learning styles exist; it’s just that there is no evidence to support the widespread belief among educators that each student has a “Learning Style” and that supporting that style produces more effective outcomes.

The idea of a “Learning Style” is a superstition. It’s not science. Until somebody brings me credible evidence, it’s going to remain a superstition.

I’ve seen studies that purport to support the idea, but the methods are almost universally flawed. The typical flaw is using multiple representations of a construct. Repetition and variety are shown to be effective educational strategies and what is attributed to learning style is more logically associated with the repetition and variety which characterize most investigations of learning style.

Therein lies the problem with Education as Science. The vast majority of practice is supported not by “verifiable means” but by a series of myths and opinions based on personal experience. The belief that we “hone our practice” in the classroom every day is valid, but the belief that what we are engaged in is science is not. You can’t teach my class the same way I teach it, nor would you want to. Most of what we learn in our own practice is not transferable. Our individual styles and techniques are unique expressions of our own instantiation of Educational practice.

This is a Good Thing.

But it’s not Science.


Inside Higher Ed :: Teachable Moments

October 20th, 2006

Apropos of nothing…

Jobs, News and Views for All of Higher Education – Inside Higher Ed :: Teachable Moments

It took me a moment to understand that, for most people, having that many messages is unusual…