November 29th, 2006
Every time I see a story like this one I cringe.
Inside Higher Ed :: Laptops’ Educational Value Questioned
A study at Carnegie Mellon University of sophomore classes in its School of Design has found that using laptops changes the way students work — but not all of those changes are positive. Among the positive findings: Students spent more time on assignments.
By this point, everybody in the class should see the flaw here. Without seeing the entire study, I may be over-reacting, but this study appears badly flawed because it’s drawing conclusions about the efficacy of technology based on how students are using it. There is no mention of how it has been implemented! If the students are required to have a laptop, but the coursework has not been designed in a way that having and using a laptop enhances the course, then the study is really meaningless.
Substitute the word “textbook” for “laptop” and consider if we were in the process of implementing textbooks into the classroom. Now read the last sentence …
Among other findings, however, were that the longer hours spent working didn’t translate into better quality of work, and that students were more likely to be isolated and working alone.
If we were doing the same things with texts — asking students to buy them and have them but not actually implementing their use in class — then that sentence might well apply to textbooks as well.
Socrates is alive and well in the Academy.
November 21st, 2006
It’s been so long since I’ve posted, you probably think I’ve given up.
Wrong! Check out the links in this post over at the Shifted Librarian.
The Shifted Librarian: Take a Break and Play
November 16th, 2006
Apropos of our discussions on assessment, evaluation, and research, this article appears on the Department of Education website:
November 14, 2006 — “NCLB Achieves Its Top Goal — Accountability” appeared in the Wisconsin State Journal
When he signed the bipartisan No Child Left Behind Act into law, President Bush said the law's "first principle is accountability."
Five years later, it's a good time to hold the law itself accountable. Is it doing what it promised? Is it working for Wisconsin's and America's kids?
The answer, on both counts, is yes.
No Child Left Behind was the nation's collective statement that every child can learn and must be taught.
I encourage you to read the whole thing because I think it really outlines some of the more problematic issues.
November 8th, 2006
According to the Federal Government, we’re really missing the boat on research in education. Here’s the documentation to help you understand why you don’t really know what you’re doing. Read the descriptions carefully, and then see if you can justify any of your “Best Practices.”
November 6th, 2006
How do we study Distance Education? If you buy into the notion that all education is at a distance, the answer becomes at once simpler and more complex. Simpler, because it means we don’t need any special Secret Knowledge. More complex, because it means we have to create mental models of this stuff that work regardless of delivery channel.
If there’s a Research Reflex in Distance Ed, it’s surely “Let’s compare the classroom to the online course and see which is more effective.” That’s a non-starter. More useless research has been done on that model than on perhaps any other topic in Education. To begin with, it falls into the category of research known in Educational Technology as the “Media Comparison Study.” The main problem is that you’re trying to assess whether one channel or another is more effective, so the results can show only one of two outcomes: No Significant Difference, or A is better than B. Neither outcome tells you whether either is any Good. If there’s a differentiated result, both could still be horrible; all you can say is that one is better than the other. And if there’s no significant difference, you still can’t determine whether either of them is good. They could be equally bad and you’d never know.
The other problem with the media comparison study is the nature of the medium itself. Which is better at telling a story, a movie or a novel? Which has the better character development, a play or a TV show? Which makes you feel better, a radio play or a short story? These are meaningless questions because the answer depends on the implementation and the audience. A movie that stays true to the underlying novel may be sacrificing the power of its own medium. A reader may prefer the pace and depth of a novel over an MTV-cut film. The short answer to each of these questions is, “It depends.”
So, what do we study when we’re interested in Distance Education?
November 5th, 2006
This week has been light on my posting about the subject directly, but many of you have discovered one of the main facts on your own. Evaluation is figuring out whether what you’re doing is working. It’s supposed to start, before you begin, with a definition of how you’ll know. It’s followed up by periodic progress checks to see whether what you’re building looks like it’s going to do what you intend. And after each implementation there’s a post-mortem analysis of the project to see if it worked. This final evaluation step is geared toward informing changes for the next iteration or, more commonly, toward justifying the expense of what you’ve done.
Once you get beyond this simple outline, there’s just not that much to it. Oh, sure, there’s the potential for a lot of troublesome instrumentation to evaluate user response, or system capacity and the like. There are whole university courses based on the subject, but when it comes down to it, the idea is very simple.
Like a lot of other activities, it’s the implementation that’s hard.
November 1st, 2006
These two terms seem to be easily conflated, but it’s really simple. Formative evaluation is what you do while you’re building the thing: evaluating it as it’s being formed. Summative evaluation is what you do after it’s built and you’ve used it: a summary of how well it worked. The two concepts apply to more than just education. A web designer, for example, might do some user testing of the interface and have some people look over the color schemes while she’s developing a new space. That’s formative. It could be quite a formal evaluation process, with a variety of variables under consideration, a lot of data collected, and a whole computer full of statistics to mash up the whole mess. Later, after the space has been designed and implemented, a round of evaluation determines how well the space achieves its goals. That’s summative.
The key element here is not how formal or informal the evaluation is. The determinant is when the evaluation happens.
In our class, you are probably not seeing a lot of the formative stuff. In part, that’s because this course (in various incarnations) is a pretty well-known commodity to me. I’m doing constant formative evaluation by looking at the content and frequency of your posts. I don’t need to ask you how it’s going because I can see it in your weekly work. The process results in things like this post about the differences between formative and summative. My own process of evaluation involves looking at each week as it finishes, and then at the cumulative effect over the course. My take on it is that we’re getting tired. It’s been a long slog, and we have about a month left. The early days went very well, as you all rose to the challenges I laid out for you. You were able to tie the theoretical stuff back to the early design, so you were doing a lot of good, deep processing of the content. Most of you have been online almost every night; I see some of you each evening, and it’s often a different crew. The posts have ranged from a tad shallow to absolutely profound. That’s very good. Most discussion board postings never get out of the “it had a good beat, I could dance to it” category, so we’ve really proven the worth of the technology.
When the class is all over, I’ll be sitting down and looking at the course as a whole. Your blogs, your chats, and your final projects will give me a lot of data about how well the course worked and where it needs work.