MOOC

I love acronyms, and this post starts with a new one for me: MOOC, which stands for Massive Open Online Course. I’m interested in finding out about this different way of teaching and presenting course content. It’s a little bit overwhelming, because it is not “structured” the


way I structure a class. For example, I’m entering during week 6, so there is work from previous weeks that I haven’t looked at or completed. That is disconcerting for someone my friends characterize as somewhat structured and a workaholic.

So this post is the first of many I may write about MOOCs and how/whether I’ve learned. The topic of this course is online learning (in a very broad sense), and this week’s topic is OER (Open Educational Resources). So, I’m diving in….

ePortfolios for Assessment

I am attending the Western AAEEBL conference in Salt Lake City, Utah, on ePortfolios. Helen Barrett was the lunchtime speaker, and she provided a great deal of information, which I have compiled in tweets under the #11WAAEEBL hashtag. [To find those, go to Twitter and type that hashtag in the search box.] Barrett discussed three points that I want to note here:

  1. Label the eportfolio with an adjective so we know its purpose, e.g. learning eportfolio
  2. Mobile technology is important for future technologies
  3. Digital storytelling is more than entertainment; it’s also a method of learning

Those items have given me food for thought as I continue my journey to determine whether ePortfolios are a solid assessment tool. I’ve discussed this a little bit in a previous post.

Learning More about Teaching and Learning (Or Lack Thereof)

I have not posted much since the beginning of the semester. It’s been hectic. I am currently traveling, so I have begun reading the book Academically Adrift. Note that this is a long post, so it may take more than a quick glance.

What I have learned from reading the book so far is disturbing. Some of it I knew but some of it was new. The following are my comments on some of the things I learned and my reactions.

  • I learned that there is an inverse relationship between the number of faculty publications and faculty orientation toward students.

I knew this intuitively, but the book summarizes studies suggesting that faculty in non-research institutions have become more research-focused, to the detriment of a focus on students (and on teaching and learning). I believe that is true in my school: we have pushed to encourage faculty to publish. If faculty have a finite amount of time to work and the reward structure has shifted to reward publications instead of good teaching, then teaching must suffer. The one shining light in our school is that teaching-related publications are now accepted as quality publications. The benefit is that faculty can use that research as a way to improve teaching. It will be interesting to see whether it results in improved student learning.

  • I learned that the higher students’ grades in the course, the more positive the student evaluations.

I have heard this many times before. My response has been that if faculty challenge students in courses, students will rise to the challenge and improve their performance. I also believed that faculty can earn higher student evaluations in courses where they are tougher, as long as the grading standards are clear and students know what to expect. My philosophy had always been that challenging students leads them to recognize that they can do the work and to put in the effort to complete it. I haven’t read the studies cited by the authors of Academically Adrift to examine their parameters, but the authors’ summaries suggest that inflated grades result from reduced demands on students.

I also have anecdotal evidence that this is true. Adjunct faculty get re-hired only if their student evaluations are equal to department averages. Yet we know that student evaluations are little more than student satisfaction surveys, and student satisfaction doesn’t necessarily translate to student learning. In a previous post, I discussed a study concluding that students admit they lie on student evaluations. And if you look at Harvard’s study on implicit assumptions, it is apparent that whether a student “likes” a faculty member can depend on factors unrelated to teaching and learning.

As I read, I resisted this suggestion. I want to believe that high-quality teaching and challenging demands result in at least equivalent student evaluations.

My belief has gone down in flames. I advocate change a great deal, but I was certain that students would recognize the value in challenges and would see that as an important quality in an instructor. Must I accept that this is an incorrect belief? I guess I must really re-examine that, because it seems to be contrary to the evidence.

  • I learned that grade inflation probably exists.

According to the studies referenced in the book, grade inflation is real. It seems that each generation looks at the one that follows and declares it unprepared. One of the studies referenced by the authors says that student class and study time dropped from 25 hours per week to 14 hours per week over the past 40 years. (That includes the time spent in class each week.) This result means either that students now are smarter and thus need to study less, or that something is seriously wrong in education when students can earn top grades while studying only 14 hours per week.

Disturbing. I’ll post more as I read and absorb the information in the book.

Disruptive, Transformational Change

In the Chronicle article College Grad Rates Stay Exactly the Same, Kevin Carey concludes: “Most of the growth in higher education has come from older, first-generation, immigrant, and lower-income students. It’s easy enough for skeptics to assert that these students aren’t graduating because they’re not college material. I think this massively discounts the likelihood that institutions whose basic structures and cultures were established decades or even centuries ago, for a particular kind of student, have done a poor job of adapting to the needs of different students going to college in a different time.”

It is a significant challenge for those of us who learned, and now teach, in educational institutions to restructure education to improve learning for all. I have talked in earlier posts about embracing disruptive, transformational change in teaching and learning. Efforts such as the Red Balloon Project, spearheaded by George Mehaffy, Vice President of AASCU, and grant programs such as the NextGen Learning program attempt to initiate discussion and action designed to improve education, and access to education, for all who are interested.

I’m action-oriented, though, so although discussion is a necessary precursor, I really just want to try approaches. And that’s where it can be difficult: where are the resources to create an assessment scheme, pilot new approaches, and determine their effectiveness in the short and long term? If I/we/our institution had the resources, I’d encourage others and myself to jump in and try new approaches. I know I’d probably stub my toes, run into walls, trample on some things that work well, and muddle through a great deal, but I would hope that in the process I’d have some success encouraging more learning and improved access to education.

That’s the gist of efforts to improve graduation rates. Regardless of how those rates are measured, it is clear that the rates can be improved. And that improvement should involve all students who are interested in a college education, not only those who have traditionally had success in the current education system.

Integrity in Professional Sports

[Integrity in other contexts:] This post does not directly relate to online teaching or how people learn. However, I’m a sports fan and I teach a sports marketing law course, so I’m going to create a connection here (whether one exists or not!).

The Denver Broncos were fined for illegally videotaping a portion of the 49ers’ October 30 practice, before the two teams were to meet. Why is that noteworthy? Because Josh McDaniels, the Broncos’ coach, was the offensive coordinator for the New England Patriots during the years that Belichick, the Patriots’ coach, regularly videotaped other teams. In addition, shortly after Belichick’s fine and loss of draft picks, Bob Kraft, the team’s owner, signed Belichick to a long-term contract.

How does this relate to integrity? Rules violations are normally interpreted as violating ethical principles relating to fairness and the duty to follow the law (depending on which ethical approach you take). If the NFL rules prohibit videotaping other teams’ practices and videotaping coaching signals during games, those who do not play by the rules can gain a competitive advantage from their breach. (See the summary of the rules in Mayer v. Belichick.) It also means that an owner’s support of the conduct can help create an environment in which rules violations are acceptable as long as the team wins [and the violations are not caught for a long time].

How does this relate to how people learn? People learn, in part, by following the examples their leaders set. The NFL’s policy has teeth only to the extent that those who violate the rules are subject to punishments sufficient to deter the conduct, and only to the extent that the NFL can negate the lesson taught to assistants (e.g., McDaniels) who learn from coaches (e.g., Belichick) and owners (e.g., Kraft) that ethics violations do not matter as long as the team wins.

In this instance, the videotaping was done by a member of the Broncos’ staff who apparently told McDaniels, and McDaniels refused to watch the videotape. However, the NFL rules require reporting of such conduct, and McDaniels did not report it. Even if you assume that McDaniels did not watch the videotape, he still violated the NFL rules. The lessons he learned from the leadership of his prior team did not demonstrate integrity with respect to that particular rule, and McDaniels seemed to follow the same rule-ignoring approach.

[Relationship to this blog and its topics:] If we’re trying to encourage learners to act with integrity, it’s difficult and frustrating when those in the public eye do not also do so. And that’s this post’s connection to integrity and how people learn.

The “A” Word-Using ePortfolios

The “A” word is Assessment. I blogged about it a couple of days ago and noted that I’d talk about my foray into ePortfolios.

I am reminded of the saying “something old is new again” (although I can’t recall its source….). At one point in my children’s education, portfolios were popular. Some of you may recall that period. My children were asked to collect their papers to present to teachers and outsiders who would evaluate their work. I recall that at least one of my children had a portfolio filled with crumpled papers, demonstrating that he wasn’t as concerned with appearance as with content!

ePortfolios are based on similar principles. In October 2010, I had the opportunity to attend a conference at which Dr. Helen Barrett, a preeminent expert on ePortfolios, made a presentation. ePortfolios can be used as formative and/or summative assessments. Dr. Barrett summarizes ePortfolios as “an electronic collection of evidence that shows your learning journey over time. Portfolios can relate to specific academic fields or your lifelong learning. Evidence may include writing samples, photos, videos, research projects, observations by mentors and peers, and/or reflective thinking. The key aspect of an eportfolio is your reflection on the evidence, such as why it was chosen and what you learned from the process of developing your eportfolio.”

I used ePortfolios during a one-year period and hope to use them again in the Spring 2011 semester in at least one class. I used Mahara, an open-source ePortfolio system. I used the ePortfolio to (1) encourage student self-reflection on their learning as related to the course learning outcomes and (2) encourage student reflection on the work in my course and other courses as it relates to the overall mission of the school. I’m currently compiling the results of that use, but the results were mixed, as indicated by the following table.

[Table: ePortfolio Student Reaction]

I still have a great deal of work to do to use ePortfolios to more effectively support assessment, self-assessment, and metacognition. But I have great hopes that they can be used, in conjunction with other assessment tools, to reliably and validly assess learning.

Does Learning to Read affect Learning to…..

In an intriguing blog post titled Wired to Read, which summarizes scientific research on brain function and literacy, Peter Wood notes that the research has revealed intriguing evidence that an individual’s ability to read comes at the cost of other brain functions. According to Wood, the research, based on a comparison of the brains of individuals who learned to read as children, as adults, or not at all, revealed differences in brain function among them. Wood, an anthropologist, posits that this means some brain functions are sacrificed so that others can work better.

I’m not a scientist, so I cannot speak directly to the validity of this view, but I can use anecdotes from my own life as examples consistent with Wood’s unconfirmed speculation. I love to read and read a lot. My husband doesn’t. My husband and I both love music. However, my husband’s musical skill and talent, his ability to hear music and replicate it with his voice and/or with musical instruments, far surpasses mine.

My own experience also supports Wood’s hypothesis that other skills may be weaker because of the emphasis on, for example, literacy. I am one of the most unobservant people my husband has ever encountered. We’ll walk or drive somewhere, and I will be completely oblivious to something that my husband sees as obvious. I tell him I’m the typical “absent-minded professor,” but he is unconvinced; he can’t understand why I can’t see something that is so obvious [to him]. And he says, on occasion, “I don’t understand how you can study law, but you can’t see ….[something that is in front of my face].” I laugh, because I don’t understand it either. Now, though, I can tell him that it’s because my brain is wired differently.

What implications does that have for teaching and learning? I don’t know yet; I haven’t thought about it sufficiently. It does confirm that applying Universal Design for Learning principles is a useful way to develop learning activities.

The “A” word–Assessment

Measuring student learning is one of an instructor’s most difficult tasks. Assessment is also a difficult task for institutions.

In the article Measuring Student Learning, Many Tools, David Glenn discusses the issue as an institutional one and points out that a group of institutions has combined to study different methods of assessment. The group, headed by Charles Blaich, director of Wabash College’s Center of Inquiry in the Liberal Arts, seeks to collect data to determine effectiveness. Dr. Blaich encourages universities to use a variety of tools, as appropriate for the school, to collect data. He also encourages universities to use data they already collect, when possible.

I’ve used a variety of assessment methods in my classes: exams, scoring rubrics, ePortfolios using Mahara (an open-source program) and now possibly Taskstream, and computer-based testing (such as Criterion, a writing program). I have also tried mind mapping, graphic organizers, research papers, short papers, multiple quizzes, take-home exams, and oral presentations.
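Combining several of those methods into one course measure is itself a design choice. As a toy sketch only (the assessment names, scores, and weights below are hypothetical, not my actual grading scheme), a simple weighted combination might look like this:

```python
# Toy sketch: combine scores from several assessment methods into a
# single weighted course score. All names, scores, and weights here
# are hypothetical, purely for illustration.

def weighted_score(scores, weights):
    """Return the weighted average of assessment scores (0-100 scale)."""
    if set(scores) != set(weights):
        raise ValueError("each assessment needs both a score and a weight")
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total_weight

scores = {"exam": 82, "eportfolio": 90, "oral_presentation": 75}
weights = {"exam": 0.5, "eportfolio": 0.3, "oral_presentation": 0.2}
print(round(weighted_score(scores, weights), 1))
```

Of course, a single number like this captures none of the deep-learning questions discussed below; it only shows how easily the mechanical part of grading can be automated compared with the hard part of deciding what the scores mean.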

The tension is palpable. I can most easily measure whether someone has memorized the content through a test, and I can measure critical thinking and the ability to apply knowledge through a test. However, does that demonstrate learning, or deep learning? How does one measure learning (see the website Approaches to Study: Deep and Surface for more on the concept of deep learning)? Measure critical thinking? Measure successful integration of new information with information previously learned?

So, I muddle along, measuring learning based on how my own learning was measured (primarily through multiple-choice, true-false, essay, and standardized, nationwide, validated tests, depending on when and what), and I add in what I learn from attending conferences, listening to experts, and applying what I’ve learned to my classes in an effort to truly encourage and measure learning. Is it successful? It depends on whom you ask.

That’s enough for this post; next post I’ll briefly discuss my foray into ePortfolios, my current preferred assessment method when I have adequate time to process the student information.

As you can see,  I will continue to struggle with the “A” word!

Online Learning Trends

In the article Mapping the Terrain of Online Education, Kenneth Green summarizes the growth of online education. He notes the increase in the number of students taking online courses and that a significant number of schools require faculty to receive training. He also refers to a Higher Ed article explaining survey results showing that the significant obstacles to online learning are internal rather than external.

Fresno State is moving toward increasing its online course offerings. It’s an exciting trend, especially as the focus continues to be on the quality of the courses rather than on adding online courses just to add them.

The struggle to evaluate the extent and quality of learning in the traditional environment is mirrored in the online environment. How do we know how much, or whether, students have learned? How do we measure that learning? Which tools work best for which students? Which instructors are more effective with which teaching methods? What are the benefits of online learning, and how are they to be measured? There is a host of questions, and some research to answer them. A core question is: how do we measure learning in any environment? If we can answer that, we can better determine the effectiveness of a variety of teaching methods.