Still Adrift in Education

In his essay, “Academically Adrift”: The News Gets Worse and Worse, Kevin Carey explains that there is growing evidence not only that college students fail to learn in college, but also that students who perform lower on the CLA (Collegiate Learning Assessment) fail to find financial security after graduation.

In an earlier post, I discussed some of the conclusions I reached from the sections of the book which I had read. Those conclusions were:

  • There is an inverse relationship between the number of faculty publications and a faculty orientation toward students.
  • The higher students’ grades in the course, the more positive the student evaluations.
  • Grade inflation probably exists.

In a later post, I discussed critical thinking as a concern: students don’t “enjoy” the challenge of traditional problem solving the way I (and other faculty) do, and that has an impact on whether students learn. If students do not see tackling and solving problems as a worthwhile challenge (and we as educators should do as much as we can to make problem-solving interesting), then there will be a significant impact on student learning.

A Not So Radical Transformation in a Core Business Course

In the introductory business law course that is required for all business majors, all the faculty teaching the course agreed to make substantial changes in the way the course was taught in order to acknowledge and address perceived deficiencies: students’ lack of college-level reading ability, lack of college-level writing ability, and need to improve critical thinking. Students complained a great deal about the additional work.

Assessing and Working to Improve Reading Skills

Although my own experience with students confirms that it would help for them to have more practice reading and writing, the students did not agree. When asked whether My Reading Lab (a publisher-created product) helped them, students said no:

[Chart: Whether My Reading Lab helped, BA 18, Fall 2011]

Note that these responses reflect only students’ perceptions. We have not yet completed an analysis to determine whether those who performed better on My Reading Lab also performed better on the tests or in the course; we will analyze that data later. This also does not include longitudinal data, i.e., whether students, upon reflection, would decide that they had learned more than they thought through the additional reading practice. However, what this data does show is that students did not embrace the additional reading practice and testing requirement.

Reading the Textbook

Student preparation for class is a concern. Many students do not read before attending class; they attend class and then read afterward. In addition, students did not study. As part of the course redesign, we required quizzes before students attended class. Students (74.2%) agreed that the quizzes helped them keep up with the reading. Even so, many still didn’t read everything. The following graph shows students’ responses, collected at the end of the semester, about how much of the textbook they had read:

[Chart: Percentage of readings completed, BA 18, Fall 2011]

Note that 40/202 or 19.8% read 90% or more of the readings and 80/202 or 39.6% read 80-89% of the readings. That means that nearly 60% of the class read 80% or more of the readings. These are the results obtained after faculty required that students read and take a quiz on the material before attending class. Thus, students were more motivated to keep up with the reading. How would these results differ if the students had not been required to take a quiz before attending class?
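The arithmetic above can be checked with a short Python sketch. The counts (40 and 80 out of 202) come from the chart in this post; the variable names are my own:

```python
# Self-reported reading completion, BA 18, Fall 2011
# (counts taken from the chart above; variable names are mine)
total_students = 202
read_90_plus = 40    # read 90% or more of the assigned readings
read_80_to_89 = 80   # read 80-89% of the assigned readings

pct_90_plus = read_90_plus / total_students * 100
pct_80_to_89 = read_80_to_89 / total_students * 100
pct_80_plus = (read_90_plus + read_80_to_89) / total_students * 100

print(f"90%+ of readings: {pct_90_plus:.1f}% of students")     # 19.8%
print(f"80-89% of readings: {pct_80_to_89:.1f}% of students")  # 39.6%
print(f"80%+ combined: {pct_80_plus:.1f}% of students")        # 59.4%
```

The combined figure (59.4%) is what the text rounds to “nearly 60%.”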

Studying

Student preparation also includes studying. The following graph shows the hours that students reported studying.

[Chart: Time spent studying, BA 18, Fall 2011]

According to these self-reports, 21.2% of students studied between 1 and 3 hours per week, 27.7% between 3 and 5 hours, and 21.7% between 5 and 7 hours. Students should have studied nearly 8 hours per week (2 hours outside class for each hour in class; this was a 4-unit course). In Chapter 4 of Academically Adrift, the authors note that students report spending 12 hours per week on their courses outside of class. According to figure 4.2 of the book, in a 7-day week, students spent approximately 7% of their time studying.
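The gap between expected and reported study time can be made concrete with a quick calculation. The 2-hours-outside-class-per-hour-in-class rule and the survey percentages come from this post; the bucket midpoints are my own simplification:

```python
# Expected weekly study time under the 2-hours-outside-class-per-hour-in-class rule
units = 4                   # this was a 4-unit course
expected_hours = 2 * units  # 8 hours per week outside class

# Self-reported weekly study hours from the survey; the midpoints (2, 4, 6)
# are my own simplification of the 1-3, 3-5, and 5-7 hour buckets
reported = {2: 0.212, 4: 0.277, 6: 0.217}

for midpoint, share in reported.items():
    shortfall = expected_hours - midpoint
    print(f"{share:.1%} of students: ~{midpoint} h/week, "
          f"about {shortfall} h short of the expected {expected_hours}")
```

Every one of these groups, together over 70% of respondents, falls below the expected 8 hours.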

Conclusions so far

The educational process requires that both faculty and students participate; if students have not completed their share, education and learning will not necessarily take place. I don’t know how this data compares to other studies on student reading, but it is difficult to foster learning if both parties are not fully invested. Students have a variety of reasons for that lack of involvement, but if the investment in education is relatively small, then the improvement in learning will be small.

In addition, this past semester my student course evaluations were much lower (partly due to a change in the institution’s survey instrument). Because I am tenured, I do not face losing my job over changes in my student evaluations. Adjunct faculty face a different reality: they depend on good student evaluations to be rehired, so adding rigor to a class could cost an adjunct his or her job.

Twearning-Update on Learning

This is a follow-up to the first post on Twearning: Twitter + Learning. The first class report on the tweets submitted in and outside of class during the first two weeks of class was good. It highlighted key information and gave all students an opportunity to review the material. The group presented using Prezi, which was an unexpected bonus.

If it’s true that reviewing something again is useful for learning, then this is a step in the right direction. I look forward to what happens long term as we progress through the semester and tackle more difficult topics.

Prezi is a neat presentation tool that permits a more 3-dimensional presentation of material. See the sample below from my presentation to the Academy of Legal Studies in Business in August 2011 [click on the title of this blog post to see the full Prezi presentation below, then click the right arrow button]:

Using Research on Learning to Guide Teaching: Huh?!

It seems perfectly sensible and logical. As educators, we should take advantage of the research on how people learn and use it to guide our teaching. But we don’t! Instead, we stick with the tried and true (“I did it this way, I learned this way, and if students don’t get it, that’s their problem!”). I’ve discussed this issue in other posts, for example Is Higher Education Ready to Change, but it’s worth repeating.

Harvard recently held a one-day symposium to encourage faculty to incorporate cognitive research findings into their teaching. The conference kicked off Harvard’s receipt of a $40 million gift, which forms the basis of grants to faculty through Harvard’s Initiative on Learning and Teaching.

In a Chronicle article, Harvard Seeks to Jolt University Teaching, Dan Barrett summarizes the purposes of the symposium and workshop. Barrett quotes Dr. Wieman, a Nobel Prize-winning physicist who has conducted research on science education and how students learn, and who explained that faculty often teach by “habits and hunches.” This is partly because most faculty are content experts, not pedagogy experts.

Other conference speakers noted that students are changing; for example, students are not as curious as before. Dr. Mahzarin R. Banaji debunked the popular belief that teaching should be designed to fit diverse learning styles, e.g. kinesthetic or visual styles. Others noted the importance of quizzing and frequent writing.

So what does this mean? It means that universities should encourage faculty to develop evidence-based teaching practices. It means that faculty workloads would have to be adjusted to give faculty time to implement and evaluate new methods of teaching. It means that universities should assist faculty in assessing the impact of those new methods. The University of Central Florida has a center devoted to helping faculty assess the impact of their teaching. I’m ready to try it!

Improving Critical Thinking in Higher Education-Possibly

In Chapter 2 of Academically Adrift, the authors note that many universities tout improving critical thinking as one of their leading goals. Yet, according to the study, the improvement in critical thinking during the first two years of school is minimal at best; according to the authors, the improvement is not statistically above zero. The authors state:

“An astounding proportion of students are progressing through higher education today without measurable gains [emphasis added] in general skills as assessed by the CLA. While they may be acquiring subject-specific knowledge or greater self-awareness on their journeys through college, many students are not improving their skills in critical thinking, complex reasoning, and writing.” (ch. 2; reading on Kindle so don’t have page #)

How can that be? Institutions require that students take a package of courses, and one key goal of that package is to improve critical thinking. How is it that institutions can miss the mark by so much? Similarly, GE courses have as one of their goals requiring students to write a minimum number of words in a course. Why is it that students cannot write (well) after their first few semesters in college?

I have a couple of thoughts (I still haven’t finished the book, so I haven’t reached the authors’ suggestions). One is that faculty have not been taught how to teach critical thinking. Most of us teach the substantive content of our disciplines, primarily in the way that we were taught. We presume that if we learned that way, then students can learn that way.

I enjoy critical thinking questions and challenges, yet I am not certain that I do a good job teaching students how to think critically. (And we don’t always agree on what that means.) I try to model how we think in the discipline through the way I solve problems, but I don’t know whether I’m helping students learn how to do it.

Critical thinking is a skill and habit of mind that must be practiced. At the same time, one must have an interest in it. If I am given a complex problem, case scenario, or reading, I dive in. I presume that I will be able to read through it and analyze it enough to understand it. I see understanding it as a challenge worth taking up.

Many of my students do not approach tough material or a complex scenario with the same gusto. They seem to just want me to tell them the answer, and they are uncomfortable with the idea that there could be multiple ways of approaching the issue and multiple solutions depending on one’s interpretation of the scenario. They are not comfortable with the idea that I want to know how they arrived at the solution; they just want to know whether their solution is the “correct” solution.

So far, the reading implicitly argues that these initial courses should be taught by full-time, tenured faculty who have had guidance in learning how to teach critical thinking. On most campuses, however, the faculty who teach the GE courses are part-time or adjunct faculty, and those faculty may be excluded from opportunities to learn how to teach critical thinking more effectively.

My theory on lack of writing ability is based on my concerns with student plagiarism. I will not repeat here what I explained in an earlier post.

Students Fail Because Colleges Fail

The Spring 2011 semester begins tomorrow, Wednesday, January 19. While reviewing my course syllabi one more time and pondering the weights to assign various assignments, I read a blog post in today’s Chronicle Faculty section titled New Book Lays Failure to Learn on Colleges’ Doorstep by David Glenn.

In that post, Glenn summarizes the findings of the recently released book, Academically Adrift: Limited Learning on College Campuses (University of Chicago Press). The book presents evidence, based on student scores on the Collegiate Learning Assessment, that faculty do not demand enough of students and that students are therefore ill-prepared by the time they graduate. One of the more disturbing, though not surprising, findings is that students self-report studying 12 hours per week (the Carnegie study recommends that students study 2 hours for every hour in class, which is a minimum of 24 hours per week for a 12-unit semester load).

That conclusion matches what I’ve found when I’ve spoken with students, especially those who are struggling. Many do not know how many hours per week to study and, even more surprising, many do not know HOW to study.

During the past few semesters, I have included in the syllabi of my undergraduate courses a recommendation that students study a certain number of hours per week, along with tips on how to study. Another recent change has been to spend time discussing how to take tests; as faculty, we assume that students know how to prepare for tests and have developed test-taking strategies. Many have not.

The book’s basis, results from the Collegiate Learning Assessment, does have the limitations noted in the article. However, one response should be for colleges to create a required course at the beginning of a student’s career that focuses on preparation for college, so that students know what is expected and can be better prepared. Another is that faculty should not be afraid to challenge students and to expect that they can do the work. Although that increases faculty workload and effort, it is necessary in order to graduate students who are truly prepared.

Innovation in Academia

One of the pleasant benefits of my current position (working with faculty using technology) is that I have the opportunity to meet faculty from many academic disciplines and to discuss what they teach and how they teach it. It reinvigorates me, and I learn different approaches to teaching my own subject. In addition, through this work I met a group of faculty who have worked together to write a manuscript on using videos to engage students and encourage critical thinking. The manuscript is under revision now.

I have frequently lamented universities’ lack of substantive support for cross-disciplinary collaboration and teaching. Perhaps that is because my area of expertise is legal studies, and legal studies are multi-disciplinary. So, it was with interest that I read Communicating Across the Academic Divide, a January 2, 2011 commentary in the Chronicle of Higher Education. In that post, the author discussed one critical barrier to such cross-disciplinary collaboration: the inability to easily communicate. The author stated, “Talking across disciplines is as difficult as talking to someone from another culture. Differences in language are the least of the problems; translations may be tedious and not entirely accurate, but they are relatively easy to accomplish. What is much more difficult is coming to understand and accept the way colleagues from different disciplines think—their assumptions and their methods of discerning, evaluating, and reporting “truth”—their disciplinary cultures and habits of mind.”

Interesting and provocative. I had always thought that significant innovation could occur through cross-disciplinary conversations  and had been frustrated by the lack of consistent, sustainable University encouragement of such efforts. But the author’s point is well taken.

While writing the article on using videos, it was evident that we each had different habits of mind and different views of what the article would require. We reached a rough compromise, and I hope that compromise will result in a published article (the first submission was rejected). My experience confirmed what this author learned: the challenge may be in convincing our colleagues that each of our approaches is genuinely valuable. Coordinating and co-writing the article was a positive experience, and because we shared an interest in engaging students and encouraging critical thinking, our different approaches did not prevent us from reaching a mutually beneficial compromise.

Downsides of Curricular Innovation

Does innovation mean dumbing down?

The National Center for Academic Transformation (NCAT) urges institutions to develop low cost, effective methods to deliver course content and improve learning. Efficient uses of technology are integral to that process.

But what if the technology is a substitute for professors? That is one of academics’ greatest fears: that faculty will be replaced with professors-in-a-box who do not “teach” but instead serve as reviewers, similar to instructors in correspondence courses.

In the article A Curricular Innovation, Reexamined, an Inside Higher Education special report on a for-credit set of courses organized by StraighterLine, the report raised questions about the use of technology in teaching. According to the article, the courses are cheap (unlimited courses for $99 per month, $399 per course, or 10 for $999) and are accepted for credit at some institutions. The report highlighted some positives (individual tutoring and the ability to self-test for improvement) and some negatives (older course materials and significant numbers of errors).

I think that quality can be incorporated into online courses. The report reminds me that we need to be vigilant in checking online materials for rigor. It also reminds me that face-to-face courses are seldom rigorously evaluated and should be subject to similar oversight for quality.

Have you ever presented materials (PowerPoint slides, handouts, exams) that had errors? Have you ever said something in class that was wrong and later had to correct it? In a face-to-face class, the only people who know you made those errors are the students who saw the materials; seldom do our peers review all our materials and note errors. In an online class, those items are memorialized electronically in the course, so errors can be more easily identified. Since many faculty want to check online course materials more carefully, the errors they find become a basis for arguing that online education and materials are inferior.

So what does that mean for innovation? We must innovate, and as faculty we should be integrally involved in overseeing both face-to-face and online courses. We have to strike a balance between academic freedom and evaluating quality, but some of the problems discovered in online courses are equally evident upon review of face-to-face courses.

Let’s treat both with equal rigor.

Another “A” word-Course Evaluations

In an article titled “Students Lie on Course Evaluations,” the Chronicle summarized a forthcoming study in which students admitted lying on course evaluations in ways that harm faculty. That’s the “A” word that relates to faculty promotion and evaluation: the assessments students make of faculty.

Faculty who study the field know that course evaluations should be only one of many items considered when evaluating faculty performance for retention, tenure, or promotion. Our university’s policy is that many factors should be factored into a decision about how well faculty encourage student learning. Yet many faculty committees spend an inordinate amount of time developing and applying complex formulas that make course evaluations the pre-eminent determinant of faculty performance.

Why is it that we are so comfortable with using course evaluation numbers as the primary factor to determine whether someone is a good teacher? Those numbers can be so easily manipulated. I know of some faculty who give students treats before the evaluations are administered. That has an impact on perception.

A couple of times I’ve returned exam results immediately before evaluations were administered. Since not everyone was happy with those results, my course evaluations declined. Although my overall course evaluation numbers have been good, I know that I have done things that have an impact on the evaluations, and those things are not directly related to teaching.

Perhaps the study referenced in the article will help us agree on the appropriate weight for student evaluations so that faculty can work on creating a learning environment instead of pleasing students.

Disruptive, Transformational Change

In the Chronicle article College Grad Rates Stay Exactly the Same, Kevin Carey concludes: “Most of the growth in higher education has come from older, first-generation, immigrant, and lower-income students. It’s easy enough for skeptics to assert that these students aren’t graduating because they’re not college material. I think this massively discounts the likelihood that institutions whose basic structures and cultures were established decades or even centuries ago, for a particular kind of student, have done a poor job of adapting to the needs of different students going to college in a different time.”

It is a significant challenge for those of us who learned and now teach in educational institutions to restructure education to improve learning for all. I have talked in earlier posts about embracing disruptive, transformational change in teaching and learning. Efforts such as the Red Balloon Project, spearheaded by George Mehaffy, Vice President of AASCU, grant programs such as the NextGen Learning program, and others attempt to initiate discussion and action designed to improve education and access to education for all who are interested.

I’m action oriented, though, so although discussion is a necessary precursor, I really just want to try approaches. And that’s where it can be difficult-where are the resources to create an assessment scheme, pilot new approaches, and determine their effectiveness in the short and long term? If I/we/our institution had the resources, I’d encourage others and myself to jump in and try new approaches. I know I’d probably stub my toes, run into walls, trample on some things that work well and muddle through a great deal, but would hope that in that process I’d have some success with encouraging more learning and improved access to education.

That’s the gist of efforts to improve graduation rates. Regardless of how those rates are measured, it is clear that the rates can be improved. And that improvement should involve all students who are interested in a college education, not only those who have traditionally had success in the current education system.

The “A” word–Assessment

Measuring student learning is one of an instructor’s most difficult tasks. Assessment is also a difficult task for institutions.

In the article Measuring Student Learning, Many Tools, David Glenn discusses the issue at the institutional level and points out that a group of institutions has combined to study different methods of assessment. The group, headed by Charles Blaich, director of Wabash College’s Center of Inquiry in the Liberal Arts, seeks to collect data to determine effectiveness. Dr. Blaich encourages universities to use a variety of tools, as appropriate for the school, to collect data. He also encourages universities to use data they already collect, when possible.

I’ve used a variety of assessment methods in my classes: exams, scoring rubrics, ePortfolios using Mahara (an open-source program) and now possibly Taskstream, and computer-based testing (such as Criterion, a writing program). I have tried mind mapping, graphic organizers, research papers, short papers, multiple quizzes, take-home exams, and oral presentations.

The tension is palpable. I can most easily measure whether someone has memorized the content through a test. I can measure critical thinking and the ability to apply concepts through a test. However, does that demonstrate learning, or deep learning? How does one measure learning (see the website Approaches to Study: Deep and Surface for more on the concept of deep learning)? Measure critical thinking? Measure successful integration of new information with information previously learned?

So, I muddle along, measuring learning based on how my learning was measured (primarily through multiple-choice, true-false, essay, standardized, nationwide, validated tests-depending on when and what) and I add in what I learn from attending conferences, listening to experts and applying what I’ve learned to my classes in an effort to truly encourage and measure learning. Is it successful? It depends on who you ask.

That’s enough for this post; next post I’ll briefly discuss my foray into ePortfolios, my current preferred assessment method when I have adequate time to process the student information.

As you can see,  I will continue to struggle with the “A” word!