Still Adrift in Education

In his essay “‘Academically Adrift’: The News Gets Worse and Worse,” Kevin Carey explains that there is further evidence not only that college students fail to learn in college, but also that students who perform lower on the CLA (Collegiate Learning Assessment) fail to find financial security after graduation.

In an earlier post, I discussed some of the conclusions I reached from the sections of the book which I had read. Those conclusions were:

  • There is an inverse relationship between the number of faculty publications and a faculty orientation toward students.
  • The higher students’ grades in the course, the more positive the student evaluations.
  • Grade inflation probably exists.

In a later post, I discussed critical thinking as a concern: students don’t “enjoy” the challenge of traditional problem solving the way I (and other faculty) do, and that has an impact on whether students learn. If students do not see tackling and solving problems as a challenge (and we as educators should do as much as we can to make problem solving interesting), then there will be a significant impact on student learning.

A Not So Radical Transformation in a Core Business Course

In the introductory business law course that is required for all business majors, all the faculty teaching the course agreed to make substantial changes in the way the course was taught in order to acknowledge and address perceived deficiencies: students’ lack of college-level reading ability, their lack of college-level writing ability, and their need to improve critical thinking. Students complained a great deal about the additional work.

Assessing and Working to Improve Reading Skills

Although my own experience with students confirms that it would help for them to have more practice reading and writing, the students did not agree. When asked whether My Reading Lab (a publisher-created product) helped them, students said no:

[Chart: Whether My Reading Lab helped (BA 18, Fall 2011)]

Note that this response reflects only the students’ perceptions. We have not yet completed an analysis to determine whether those who performed better on My Reading Lab also performed better on the tests or in the course; we will analyze that data later. Nor does this include longitudinal data, i.e., whether students, upon reflection, would decide that they had learned more than they thought from the additional reading practice. What this data does show, however, is that students did not embrace the additional reading practice and testing requirement.
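We haven’t run that analysis yet, but a minimal sketch of what it might look like follows. The file name and column names are hypothetical placeholders, not our actual data; this is only one way the comparison could be set up.

```python
# Hypothetical sketch: do students who scored higher on My Reading Lab
# also perform better on exams? File and column names are placeholders.
import pandas as pd

df = pd.read_csv("ba18_f11_scores.csv")  # assumed columns: reading_lab_score, exam_avg

# Simple correlation between reading-lab performance and exam performance
print(df["reading_lab_score"].corr(df["exam_avg"]))

# Did students above the median reading-lab score average higher exam scores?
above_median = df["reading_lab_score"] > df["reading_lab_score"].median()
print(df.groupby(above_median)["exam_avg"].mean())
```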

Reading the Textbook

Student preparation for class is a concern. Many students do not read before attending class; they attend class first and read afterward. In addition, many students did not study. As part of the course redesign, we required quizzes before students attended class. Most students (74.2%) agreed that the quizzes helped them keep up with the reading. Even so, many still didn’t read everything. The following graph shows students’ responses, at the end of the semester, about how much of the textbook they had read:

[Chart: Percentage of readings completed (BA 18, Fall 2011)]

Note that 40/202 or 19.8% read 90% or more of the readings and 80/202 or 39.6% read 80-89% of the readings. That means that nearly 60% of the class read 80% or more of the readings. These are the results obtained after faculty required that students read and take a quiz on the material before attending class. Thus, students were more motivated to keep up with the reading. How would these results differ if the students had not been required to take a quiz before attending class?
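For anyone who wants to check the arithmetic, here is a quick sketch using only the counts reported above (the variable names are mine):

```python
# Reading-completion arithmetic from the survey counts quoted above.
total_respondents = 202
read_90_plus = 40     # read 90% or more of the readings
read_80_to_89 = 80    # read 80-89% of the readings

print(f"{read_90_plus / total_respondents:.1%}")                      # 19.8%
print(f"{read_80_to_89 / total_respondents:.1%}")                     # 39.6%
print(f"{(read_90_plus + read_80_to_89) / total_respondents:.1%}")    # 59.4%, i.e., nearly 60%
```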

Studying

Student preparation also includes studying. The following graph shows the hours per week that students reported studying.

[Chart: Time spent studying (BA 18, Fall 2011)]

According to these self-reports, 21.2% of students studied between 1 and 3 hours per week, 27.7% studied between 3 and 5 hours per week, and 21.7% studied between 5 and 7 hours per week. Students should have studied nearly 8 hours per week (2 hours outside class for each hour in class; this was a 4-unit course). In Chapter 4 of Academically Adrift, the authors note that students report spending 12 hours per week on their courses outside of class. According to figure 4.2 of the book, in a 7-day week, students spent approximately 7% of their time studying.
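To make that arithmetic explicit, here is a small sketch of the “2 hours outside class per unit” rule of thumb and the figure 4.2 estimate; the numbers come from the paragraph above.

```python
# Study-time arithmetic from the paragraph above.
units = 4
expected_outside_hours = 2 * units     # "2 hours outside class per unit" rule of thumb
print(expected_outside_hours)          # 8 hours per week for this 4-unit course

# Academically Adrift, figure 4.2: roughly 7% of a 7-day week spent studying.
hours_in_week = 7 * 24
print(round(0.07 * hours_in_week, 1))  # 11.8 hours, consistent with the ~12 hours/week the authors report
```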

Conclusions so far

The educational process requires that both the faculty and the students participate; if students have not completed their share, then education and learning will not necessarily take place. I don’t know how this data compares to other studies on student reading, but it is difficult to foster learning if both parties are not fully invested. Students have a variety of reasons for that lack of involvement, but if the investment in education is relatively small, then the improvement in learning will be small.

In addition, this past semester my student course evaluations were much lower (partly due to a change in the institution’s survey instrument). Because I am tenured, I do not face losing my job over the change in my evaluations. Adjunct faculty face a different reality: they depend on good student evaluations in order to be rehired. For them, adding rigor to a class could cost a faculty member his or her job.

Higher Ed’s Move from Pursuit of Knowledge to Pursuit of Enterprise Dollars, and Its Impact

This is a summary of another article that I haven’t yet had time to read, so this information is secondhand. It is interesting because it discusses the university’s move from the pursuit of knowledge to the pursuit of business dollars.

Reblogged from opendistanceteachingandlearning:

[Image retrieved from http://tinyurl.com/786af4q, 31 January 2012]

While this week is a “break” week in the change mooc, I decided to reflect on the article to which I referred last week – Rhoads, R.A. 2011. The U.S. research university as a global model: some fundamentals to consider (InterActions: UCLA Journal of Education and Information Studies, 7(2): 1-27).

My reason for being intrigued by Rhoads’ (2011) analysis and critique is twofold:

  • Throughout this mooc there were questions asked about the future of higher education and more specifically on how advances in technology are shaping teaching and learning. Rhoads’ analysis indicates that there are other forces at work that we should not lose sight of…
  • I completed my application for “rating” by our National Research Foundation (NRF) as part of the institutional imperative to increase the number of NRF rated researchers resulting in a higher standing for my home…


What’s New Is Old Again: More High-Tech Cheating

In the Chronicle article With Cheating Only a Click Away, Professors Reduce the Incentive, the author discusses student cheating in classrooms that use student response systems, or clickers.

Clickers have been touted as an active learning technique that engages students and improves learning. However, just as with any technique, there is a downside. According to the article’s author, the larger the class, the more likely it is that students will cheat using clickers. Students cheat by sending a representative to class to carry their clickers and record responses. And when clickers are used for homework, students consult with each other during class to get the answers.

Solutions:

  • Count clicker responses as a relatively small percentage of the overall grade (5% or less)
  • Have teaching assistants “patrol” the classes to search for those who have multiple clickers
  • Count the number of attendees and the number of clicker responses (difficult with mega-classes; a rough sketch of this check follows the list)
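One way that last check could be automated, assuming the instructor has an attendance headcount and the set of clicker IDs that responded; the function name and example numbers below are hypothetical, not from the article.

```python
# Hypothetical sketch: flag a session when more distinct clickers respond
# than students are counted in the room.
def possible_proxy_clicking(headcount: int, responding_clicker_ids: set) -> bool:
    """True when distinct clicker responses exceed the headcount, which
    suggests someone may be carrying classmates' clickers."""
    return len(responding_clicker_ids) > headcount

# Example: 148 students counted, 153 distinct clickers responded -> flagged.
print(possible_proxy_clicking(148, {f"clicker_{i}" for i in range(153)}))  # True
```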

For every educational innovation or technology, there’s a corresponding reaction by some to minimize the effort required, and learning suffers accordingly. Use technology, but be aware of that tension and take steps to address it. I’ve previously discussed cheating in this blog.

Learning More about Teaching and Learning (Or Lack Thereof)

I have not posted much since the beginning of the semester. It’s been hectic. I am currently traveling, so I have begun reading the book Academically Adrift. Note that this is a long post, so it may take more than a quick glance.

What I have learned from reading the book so far is disturbing. Some of it I knew but some of it was new. The following are my comments on some of the things I learned and my reactions.

  • I learned that there is an inverse relationship between the number of faculty publications and a faculty orientation toward students.

I knew this intuitively, but the book summarizes studies that suggest that faculty in non-research institutions have become more research-focused. That research focus comes at the expense of a focus on students (and on teaching and learning). I believe that is true in my school: we have pushed to encourage faculty to publish. If faculty have a finite amount of time to work and the reward structure has shifted to reward publications instead of good teaching, then teaching must suffer. The one shining light in our school is that teaching-related publications are now accepted as quality publications. The benefit is that faculty can use that research as a way to improve teaching. It will be interesting to see whether that results in improved student learning.

  • I learned that the higher students’ grades in the course, the more positive the student evaluations.

I have heard this many times before. My response has been that if faculty challenge students in courses, students will rise to the challenge and improve their performance. I also believed that faculty can get higher student evaluations in tougher courses as long as the grading standards are clear and students know what to expect. My philosophy had always been that challenging students leads them to recognize that they can do the work and to put in the effort to complete it. I haven’t read the studies cited by the authors of Academically Adrift to examine their parameters, but the authors’ summaries suggest that inflated grades result from reduced demands on students.

I also have anecdotal evidence that this is true. Adjunct faculty only get rehired if their student evaluations are equal to department averages. Yet we know that student evaluations are nothing more than student satisfaction surveys, and student satisfaction doesn’t necessarily translate into student learning. In a previous post, I discussed a study that concluded that students admit they lie on student evaluations. And if you look at Harvard’s study on implicit assumptions, one thing that is apparent is that whether a student “likes” a faculty member can depend on factors unrelated to teaching and learning.

As I read, I resisted this suggestion. I want to believe that high-quality teaching and challenging demands result in at least equivalent student evaluations.

My belief has gone down in flames. I advocate change a great deal, but I was certain that students would recognize the value in challenges and would see that as an important quality in an instructor. Must I accept that this is an incorrect belief? I guess I must really re-examine that, because it seems to be contrary to the evidence.

  • I learned that grade inflation probably exists.

According to the studies referenced in the book, grade inflation is real. It seems that every generation looks at the previous one and says that the current generation is unprepared. One of the studies referenced by the authors says that students’ combined class and study time went from 25 hours per week to 14 hours per week over the past 40 years. (That includes the time spent in class each week.) This result means either that students now are smarter and thus need to study less, or that there is something seriously wrong in education when students can earn top grades and yet study only 14 hours per week.

Disturbing. I’ll post more as I read and absorb the information in the book.