In This Issue:
- Assessments that produce real learning
- Better than postmortems: after-action reviews
- Teacher evaluation that improves classroom performance
- Teacher evaluation done right
- Boston students push to include their opinions in teachers’ evaluations
- More from the Measures of Effective Teaching study
- Project-based learning in an Oregon alternative high school
- How Washington has affected K-12 education in recent decades
- Teen births decline
Quotes of the Week
“We cannot enrich the minds of our students by testing them on texts that purposefully ignore their hearts. By doing so, we are withholding from our neediest students any reason to read at all. We are teaching them that words do not dazzle but confound. We may succeed in raising test scores by relying on these methods, but we will fail to teach them that reading can be transformative and that it belongs to them.”
-Claire Needell Hollander, New York City middle-school teacher, in “Teach the Books, Touch the Heart” in The New York Times, Apr. 22, 2012, p. 4, http://nyti.ms/Jv8pAw
“Rather than telling students to study for exams, we should be telling them to study for learning and understanding.”
-David Jaffee (see item #1)
“If a student is well prepared, algebra is a good thing regardless of the student’s age, but if a student is not prepared, it can be a bad thing, regardless of the student’s age.”
-Tom Loveless, quoted in “Researchers Suggest Early Algebra Harmful to Struggling Students” by Sarah Sparks in Education Week, Apr. 25, 2012 (Vol. 31, #29, p. 10), http://www.edweek.org/ew/articles/2012/04/20/29aera.h31.html
“It’s inefficient to withhold key learnings from other teams and allow them to make the same mistakes or prevent them from replicating best practices.”
-Todd Henshaw (see item #2)
“She’s caught me being a phenomenal teacher, and has also seen moments of shame, but ten varied visits provide her with a picture of me that is actually…me!”
-Boston teacher Lillie Marshall on her supervisor’s frequent visits (see item #4)
“I think the current generation of youth are perhaps more conscientious and cautious.”
-Dr. John Santelli of Columbia University (see item #9)
1. Assessments That Produce Real Learning
In this thoughtful Chronicle of Higher Education article, University of North Florida professor David Jaffee says that teachers’ frequent exhortation to their students to study for exams “actually encourages student behaviors and dispositions that work against the larger purpose of human intellectual development and learning. Rather than telling students to study for exams, we should be telling them to study for learning and understanding.”
It bothers Jaffee and his colleagues that so many students have an instrumental view of college: I’m taking this course to get a passing grade, meet a requirement, graduate with a degree, get a job, make money, and be happy. “Everything is a means to an end,” says Jaffee. “Nothing is an end in itself. There is no higher purpose.”
And yet teachers constantly reinforce this view of life. “When we tell students to study for the exam or, more to the point, to study so they can do well on the exam, we powerfully reinforce that way of thinking,” he says. “On the one hand, we tell students to value learning for learning’s sake; on the other, we tell students they’d better know this or that, or they’d better take notes, or they’d better read the book, because it will be on the next exam; if they don’t do these things, they will pay the price in academic failure. This communicates to students that the process of intellectual inquiry, academic exploration, and acquiring knowledge is a purely instrumental activity – designed to ensure success on the next assessment.” No wonder students are constantly asking whether something will be on the test.
“This dysfunctional system reaches its zenith,” Jaffee continues, “with the cumulative ‘final’ exam. We even go so far as to commemorate this sacred academic ritual by setting aside a specially designated ‘exam week’ at the end of each term. This collective exercise in sadism encourages students to cram everything that they think they need to ‘know’ (temporarily for the exam) into their brains, deprive themselves of sleep and leisure activities, complete (or more likely finally start) term papers, and memorize mounds of information.”
Dysfunctional? Yes, because cognitive scientists say human learning occurs only when there is retention and transfer. “Retention involves the ability to actually remember what was presumably ‘learned’ more than two weeks beyond the end of the term,” says Jaffee. “Transfer is the ability to use and apply that knowledge for subsequent understanding and analysis. Based on this definition, there is not much learning taking place in college courses.” Here’s the logic:
- Research shows that short-term memorizing – cramming – doesn’t contribute to retention or transfer.
- It may, however, yield short-term results in exam scores.
- Many final exams are high stakes, determining a large part of the final course grade.
- This leads students to cram for exams.
- Therefore, many students will have little long-term learning from their courses.
This explains why so many students don’t know material that was “covered” in previous courses. “The reason they don’t know it is because they did not learn it,” says Jaffee. “Covering content is not the same as learning it.”
What is to be done? Jaffee says two approaches to assessment will solve the problem [and this goes for K-12 education as well]: formative and authentic. “Used jointly,” he says, “they can move us toward a healthier learning environment that avoids high-stakes examinations and intermittent cramming.”
- Formative assessments – These during-the-year checks for understanding (which don’t require formal grading) combine teaching and learning, allowing students to develop their abilities, assess their progress, and zero in on areas that need improvement.
- Authentic assessments – These are often “open book” and require students to demonstrate and apply what they have learned – theories, concepts, principles, etc. – to solve a problem they might encounter in the real world.
Many universities are moving in this direction, says Jaffee. For example, some professors are no longer required to give final exams and alternative assessments are on the rise. “Yes, our mantra of ‘studying for exams’ has created and nourished a monster,” he concludes, “but it’s not too late to kill it.”
“Stop Telling Students to Study for Exams” by David Jaffee in The Chronicle of Higher Education, Apr. 27, 2012 (Vol. LVIII, #34, p. A35),
2. Better Than Post-Mortems: After-Action Reviews
In this intriguing Wharton Leadership Digest article with K-12 implications, Penn professor Todd Henshaw (formerly Director of Military Leadership at West Point) describes the U.S. Army’s “after-action review” process, which was developed in the 1970s to help soldiers learn from mistakes and achievements. After-action review, which has been called one of the most successful organizational learning methods ever devised, consists of an active discussion of four key questions:
- What did we intend to accomplish – i.e., what was our strategy?
- What did we do – i.e., how did we execute relative to our strategy?
- Why did it happen that way – i.e., why was there a strategy/execution gap?
- What will we do to adapt our strategy or refine our execution for a better outcome – OR how do we repeat our success?
Henshaw says after-action reviews have been extremely helpful in the military, but attempts to use them in the corporate world have often been unsuccessful, largely because they are reduced to what Peter Senge calls a “sterile technique.”
For after-action reviews to improve team performance (and become a catalyst for cultural change), Henshaw says “leaders must create a climate of transparency, selflessness, and candor where team members can challenge current ways of thinking and performing. Everyone – leaders included – must openly share where their own performance may have contributed to a team failure, and to acknowledge the people and practices that helped create the team’s success.”
Here are Henshaw’s suggestions for making after-action reviews a “living practice” that transforms team performance and becomes part of the organization’s DNA:
- Schedule after-action reviews consistently. “‘Postmortems’ have a negative connotation that discourages participation and enthusiasm,” says Henshaw. Leaders should hold after-action reviews immediately after successful or unsuccessful events, “using the positive positioning of improving your own performance and not that of someone else.”
- Gather relevant facts and figures. Specifically, what went well? What didn’t?
- Make participation mandatory. Everyone on the team should be involved in the discussion. “Each participant will likely have a different perspective on the event, and this serves as a key input into the after-action review,” says Henshaw. “Open-ended questions that are related to specific standards or expectations will encourage involvement.”
- Focus on three things: the performance of team members, the leader, and the team as a whole. “Keep the attention on facts and outcomes,” Henshaw advises. “What are the strengths and weaknesses of each?” This keeps the discussion centered on what the team can control, as opposed to external factors.
- Follow the “rules of engagement.” To encourage honest participation and mutual trust, there must be: confidentiality (joint learning is shared, individual comments are not); transparency; focus on individual and team improvement and development; and preparation for “next time.”
- Share learning across the organization. This might mean using meetings and blogs to make the lessons of after-action reviews available to other teams. “It’s inefficient to withhold key learnings from other teams and allow them to make the same mistakes or prevent them from replicating best practices,” says Henshaw.
- Consider a before-action review. Before your next significant challenge, why not convene the team and review lessons learned and how they can be put to work?
“After-Action Reviews” by Todd Henshaw in Wharton Leadership Digest – Nano Tools for Leaders, Apr. 24, 2012,
3. Teacher Evaluation That Improves Classroom Performance
In this Harvard Educational Review article, Brown University professor John Papay notes three points of agreement among researchers and practitioners:
- Teachers are the most important school-level factor in student learning.
- There is wide variation in different teachers’ impact on achievement.
- The teacher evaluation system in most districts is not working well.
“In such a system,” says Papay, “not only do administrators and policy makers gain no real information about teacher effectiveness, but teachers receive no meaningful feedback to help them improve their instructional practices.”
This situation has led many districts to consider using value-added student achievement data to evaluate individual teachers. Although this approach appears to be sophisticated and seems to make sense, Papay questions its accuracy, reliability, and validity. He concludes that it’s no better than the traditional classroom observation process in measuring classroom effectiveness – and that’s not saying much!
Which brings him to his main point – that teacher evaluation needs to go beyond measuring classroom performance and focus much more on helping teachers get better. “If teacher evaluation is to improve student learning systematically,” he says, “it must be used as a tool to promote continued teacher development. Using teacher evaluations in this manner holds much more promise for comprehensive change than identifying (and rewarding or sanctioning) the best and worst performers.”
In most cases, Papay says, value-added data do very little to help teachers develop their skills. Classroom observations, on the other hand, have that potential – if visits are frequent and unannounced, based on clear standards of good teaching, with well-trained administrators giving teachers candid, helpful comments on what they see. “Effective evaluators must be willing to provide tough assessments and to make judgments about the practice, not the person,” says Papay. “They must also be expert in providing rich, meaningful, and actionable feedback to the teachers they evaluate.”
This can be done by peers as well as administrators, he notes, which reduces the burdens on overworked principals and assistant principals. The Peer Assistance and Review (PAR) program does just this.
“Refocusing the Debate: Assessing the Purposes and Tools of Teacher Evaluation” by John Papay in Harvard Educational Review, Spring 2012 (Vol. 82, # 1, p. 123-141),
4. Teacher Evaluation Done Right
In this Huffington Post article, Boston teacher Lillie Marshall [my daughter] cites some of the results of a recent Teach Plus survey of Massachusetts’s most troubled schools:
- 41 percent of teachers rated their evaluators as fair or poor overall;
- 35 percent rated the quality of feedback they received from evaluators as fair or poor;
- 45 percent rated their evaluators fair or poor in content knowledge.
“This needs fixing urgently,” says Marshall. “…if we’re not getting this right, it has the potential to sabotage everything else.”
In marked contrast, this is the way Marshall describes her supervision by her school’s history and ELA department head, Tracy Wagner:
- Content knowledge – Wagner has ten years of successful teaching experience with a similar student population; “she knows her stuff,” says Marshall, “and I trust her. The action steps she provides work.”
- Frequent visits – Wagner pops into Marshall’s classroom at least ten times a year for 10-20 minutes, providing “a much more authentic understanding of me as a teacher than just one or two fancy, announced, full-class observations. She’s caught me being a phenomenal teacher, and has also seen moments of shame, but ten varied visits provide her with a picture of me that is actually…me!”
- Face-to-face feedback – Soon after each visit, Wagner has a casual conversation with the teacher. “Let’s be real,” says Marshall, “a specific, frank, timely conversation provides teachers with far more valuable feedback than a formal observation write-up. Talking allows me to give Tracy the context of the other 99% of my teaching which she doesn’t observe, and lets us delve deeper into her observations and next steps.” Wagner also follows up with written notes on each observation.
- Looking at learning – “During her observations, my evaluator looks at student work and talks with students to gauge understanding,” says Marshall. “Tracy is able to give me concrete feedback on what skills my students are getting, and suggestions for which specific skills I should focus on next.” For example, during a classroom visit in February, Wagner noticed that Marshall’s seventh graders were doing a good job selecting evidence in their essays but needed more work analyzing how that evidence proved their theses. She made this point to Marshall afterward and printed out three suggested lesson plans. In a subsequent visit, Wagner could see that those lessons had worked – students were using evidence more effectively. “Now that they’ve got the foundation set, you can teach lessons on spicing up word choice,” she said.
- I’m your evaluator and I’m here to help you. “How I see it,” says Wagner, “my job is to meet each teacher wherever they are in the path to improving their craft, and to walk them further along that journey.”
Marshall concludes, “How lovely it is, as professionals, to have affirmation that we’re growing, and to receive concrete ways to produce further growth… When done right… evaluation… provides not only accountability, but also a welcome boost to the next level of excellence.”
“5 Teacher Evaluation Must-Haves” by Lillie Marshall in The Huffington Post, Apr. 27, 2012
5. Boston Students Push to Include Their Opinions in Teachers’ Evaluations
“As people across the country discuss supporting and evaluating teachers, why are they not involving those with the most intimate knowledge of the classroom?” ask members of the Boston Student Advisory Council (BSAC) in this Harvard Educational Review article. “As students, we are the ones in the classroom, and our futures are affected by what happens there every day.”
BSAC leaders describe how they approached Boston central office and teachers’ union officials in 2006 with the idea of including student opinions in teachers’ evaluations, and piloted their Friendly Feedback Form in one Boston high school during the 2007-08 school year. Teachers suggested some edits to the questionnaire, 400 students filled it out anonymously, the results were tabulated, teachers received a summary of the feedback in sealed envelopes, and student leaders presented the overall results in a schoolwide professional development meeting, highlighting best practices being used in classrooms and areas for improvement. “In addition to learning valuable facts and figures from this session,” they write, “this discussion allowed students and teachers to improve their relationships and promote a more positive school culture.”
In another Boston high school, teachers balked at being evaluated by students, but the administrators volunteered. BSAC members created an Administrator Constructive Feedback Form and went through a similar process with the administrative team with positive results.
Based on the success of these pilots, BSAC leaders proposed that all Boston high-school students should evaluate their teachers. They worked with district and union officials to create a two-page questionnaire on student learning and classroom management and instruction that could be filled out in less than 15 minutes, with open-response questions voluntary in case students were worried that their handwriting could be recognized.
BSAC members were careful to include a section for students to reflect on their own learning practices in each classroom. “This self-reflection would help students take more ownership of their education and also reduce the potential for ‘teacher bashing’,” they write. “Favoring easy teachers and penalizing demanding teachers was a huge concern from many of the people with whom we met. In order to alleviate this concern, we decided it was important to evaluate ourselves too. If we could not honestly and openly respond to questions about our own learning, then perhaps we could not honestly provide feedback to our instructors.”
Boston’s school board unanimously approved the questionnaire and implementation plan in May 2010, and it was implemented in 29 Boston high schools during the 2010-11 school year. The response was “overwhelmingly positive,” according to the authors, with teachers saying it gave them a better understanding of how students were learning and specific ideas for improving instruction.
The next step was pushing to have student feedback included as an official part of Boston teachers’ evaluations. In a survey, 86 percent of the city’s high-school headmasters supported this move. BSAC didn’t stop there; its leaders began actively campaigning to have student voice included in the new Massachusetts teacher evaluation system. In June 2011, the Massachusetts Board of Elementary and Secondary Education voted on an evaluation framework that includes student feedback in teacher evaluations beginning in 2013-14, with details to be worked out after further study.
The BSAC authors end by noting the support among students around the nation for a voice in teacher evaluation. Some school districts are moving in this direction – for example, the Brookline (MA) schools strongly encourage teachers to solicit student feedback as part of their evaluations. Getting teachers on board with the idea is a prime BSAC goal: “After all, the messaging of the importance of student voice in teacher evaluations will be much more powerful coming from teachers and students together. We need to continue improving the teacher evaluation system as a cohesive unit. Teachers and students are both heavily invested in the education system,” they say. “We have to work together.”
“‘We Are the Ones in the Classrooms – Ask Us!’ Student Voice in Teacher Evaluations” by the Boston Student Advisory Council: Abibatu Bayoh, Dan Chu, Adam Fischer, Cheria Funches, Ayan Hassan, Teena-Marie Johnson, Damien Leach, Xin Jian (Peter) Li, Eseniolla Maitre, Steve Marcelin, Will Poff-Webster, Carlos Rojas, Christina Moriah Smith, Colin Smith, Dennis Tan, Rosanna Velasquez, Mengning (Melinda) Wang, Rachel Wingert, and adult staff: Rachel Gunther, Caroline Lau, Maria Ortiz, and Jenny Sazama in Harvard Educational Review, Spring 2012 (Vol. 82, # 1, p. 153-162),
6. More from the Measures of Effective Teaching Study
In this Education Week article, Sarah Sparks says the latest data from the massive Gates-funded Measures of Effective Teaching Project “may give pause to districts working to develop teacher-effectiveness evaluations.” MET researchers are finding that assessments of teachers similar to those used in some district value-added systems “aren’t good at showing which differences are important between the most- and least-effective educators, and often misunderstand the ‘messy middle’ that most teachers occupy.”
“The middle is a lot messier than a lot of state policies would lead us to believe,” said MET director Steve Cantrell at a recent AERA conference in Vancouver, BC. “Based on the practice data, if I look at the quartiles, all that separates the 25th and 75th on a class [observation] instrument is .68 – less than 10 percent of the scale distribution. In a lot of systems, the 75th percentile teacher is considered a leader and the 25th percentile is considered a laggard.”
As for the idea of firing the lowest-performing quartile of teachers, Cantrell says that would have very little impact on the quality of instruction in a school. After observing and analyzing more than 24,000 lessons, MET researchers have concluded that the differences between effective and ineffective teachers lie mostly in the area of classroom management and behavior, not academic rigor and quality. Generally, classroom practice is “orderly but unambitious,” said Cantrell.
Another MET researcher at the AERA conference, Rutgers professor Drew Gitomer, says that the way teachers frame questions is critically important to uncovering and fixing students’ misconceptions. For example, it’s more helpful for a math teacher to use three cubed rather than two squared as an example of exponents: two squared produces the same answer (4) whether students exponentiate correctly or erroneously multiply the base by the exponent, whereas three cubed, if solved incorrectly, reveals the misconception.
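The arithmetic behind Gitomer’s point can be made concrete. A quick sketch (Python, with illustrative function names of my own) compares correct exponentiation with the common multiply-the-base-by-the-exponent error, showing why two squared is a poor diagnostic item and three cubed a good one:

```python
def correct(base, exp):
    """Correct exponentiation: base raised to the exp power."""
    return base ** exp

def misconception(base, exp):
    """Common student error: multiplying the base by the exponent."""
    return base * exp

def exposes_error(base, exp):
    """A diagnostic item is informative only when the two answers differ."""
    return correct(base, exp) != misconception(base, exp)

# Two squared hides the error: 2**2 and 2*2 both equal 4.
assert correct(2, 2) == misconception(2, 2) == 4

# Three cubed exposes it: 3**3 is 27, but 3*3 is only 9.
assert correct(3, 3) == 27 and misconception(3, 3) == 9

print(exposes_error(2, 2))  # False
print(exposes_error(3, 3))  # True
```

Any base-exponent pair where base ** exp equals base * exp (such as 2 and 2) conceals the misconception; pairs where the values diverge reveal it.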
Gitomer conducted in-depth interviews with 60 teachers and found that the lower-performing teachers often had weak reasoning for instructional decisions – they lost track of the larger purpose behind a lesson and used personal preference rather than best practices to decide how to proceed. Stronger teachers, on the other hand, used questions to look at larger classes of problems and could describe how their approach improved student learning.
Another AERA presenter, Harvard University researcher Ronald Ferguson, shared evidence that students’ assessments of their teachers correlate strongly with student achievement. Ferguson asks students detailed questions that get at the “seven C’s” of teaching practice:
- Caring about students;
- Captivating them by showing learning is relevant;
- Conferring with students to show their ideas are welcome and respected;
- Clarifying lessons so knowledge seems feasible;
- Consolidating knowledge so lessons are connected and integrated;
- Controlling behavior so students stay on task;
- Challenging students to achieve.
Students taught by teachers who scored in the top quartile on the seven C’s on anonymous student surveys achieved a full semester above students taught by teachers scoring in the bottom quartile.
“MET Studies Seek More Nuanced Look at Teaching Quality” by Sarah Sparks in Education Week, Apr. 25, 2012 (Vol. 31, #29, p. 12),
7. Project-Based Learning in an Oregon Alternative High School
In this Education Week article, Liana Heitin reports on the turnaround of the Al Kennedy Alternative School in Cottage Grove, Oregon. When Tom Horn showed up as the new principal several years ago, he was cursed out by students smoking cigarettes and what might have been marijuana near the front door. The attendance rate among the school’s poverty-stricken students was 23 percent, there had been several crystal meth and cocaine overdoses, a number of students were teen parents, test scores were abysmal, the dropout rate was 20 percent, and no graduates were going on to college.
Horn decided that project-based learning was the way to get students engaged, and picked sustainability as the school’s theme. He divided the school into five cohorts – agriculture, energy, forestry, architecture, and water – and challenged each to come up with projects that would have a tangible, positive effect on the local community. Each cohort stays with the same teacher all day, giving teachers complete autonomy in scheduling, including multi-day field trips. One teacher said this lets her “extend a lesson or end it and come back to it the next day and get them up and active. So many lessons lend themselves to being outside.” Horn adds, “The model is a mixture of elementary school and a master’s cohort.”
Projects have included beekeeping, farming tilapia (a freshwater fish), building Aleutian kayaks and taking them out to monitor the river’s water quality, pulling invasive species of plants from the riverbank, and working on sustainable housing prototypes. Close to 60 community volunteers help out the full-time school staff of nine teachers and aides and a counselor.
What about state standards? Horn has teachers start at the top of Bloom’s taxonomy with ambitious project goals – often creating something – and work their way down to lower-order skills, mapping out the standards on a matrix. For example, students working on beekeeping might write a paper on bee behavior to address a state language-arts standard. Teachers give a battery of assessments three times a year to measure skill levels and set aside time for individual interventions. Students see the skills they lack, and that motivates them to fill the gaps. For math, students use Khan Academy’s free video tutorials to achieve mastery in specific areas.
How are the results? The student pass rate on Oregon’s state reading assessment has gone from 9 percent to 52 percent and math from 18 to 36 percent, while the writing assessment pass rate is still at 28 percent. Student attendance is 90 percent, and 40 percent of graduates are enrolling in college.
“Project-Based Learning Helps At-Risk Students” by Liana Heitin in Education Week, Apr. 25, 2012 (Vol. 31, #29, p. 8-9),
8. How Washington Has Affected K-12 Education In Recent Decades
“Uncle Sam is dreadful at micromanaging what actually happens in schools and classrooms,” says Fordham Institute honcho Chester Finn Jr. in this helpful history lesson in Education Week. “What he’s best at is setting agendas and driving priorities. Through a combination of jawboning, incentivizing, regulating, mandating, forbidding, spotlighting, and subsidizing, he can significantly influence the overall direction of the K-12 system and catalyze profound changes in it (though the system is so loosely coupled that these changes occur gradually and incompletely).”
Finn believes that in each of the last seven decades, Washington (he includes all three branches of government) has had a profound effect on schools. This has happened only when (a) there was a sizable, pent-up problem in need of a large solution; (b) the problem affected the whole country; (c) the problem seemed to be amenable to the tools in the federal toolkit; and (d) the political stars aligned for long enough to make things happen. Here is Finn’s list:
- 1950s – Brown v. Board of Education struck down government-mandated racial segregation in southern schools and Sputnik spurred a ramping-up of science and math education.
- 1960s – Lyndon Johnson’s Elementary and Secondary Education Act and Economic Opportunity Act expanded federal funding of schools and launched Head Start, the Job Corps, and VISTA.
- 1970s – The Education for All Handicapped Children Act “righted another historic wrong,” says Finn, “by declaring that every youngster with disabilities is entitled to a ‘free, appropriate public education’ in the ‘least restrictive environment.’”
- 1980s – The sharply worded A Nation at Risk report shifted the nation’s priority from equity to excellence, boosting standards, tests, and results-based accountability.
- 1990s – The National Assessment of Educational Progress (NAEP) became the “first real set of standards by which to determine just ‘how good is good enough’ when it comes to student achievement in various subjects,” says Finn.
- 2000s – The No Child Left Behind Act, reauthorizing ESEA, declared that every single student should be “proficient” in reading and math, made public the breakdown of achievement by subgroup, and pushed every school to make “adequate yearly progress” or face consequences.
- 2010s – Race to the Top is spurring states and districts to jump through various reform hoops to win federal dollars.
“None of this worked as well as ardent advocates had hoped,” concludes Finn. “All brought unintended consequences, pushback, and sizable financial burdens. But American education is a very different enterprise – and for the most part a better enterprise – as a result of these game-changing initiatives from Washington.”
“When Washington Focuses on Schools” by Chester Finn Jr. in Education Week, Apr. 25, 2012 (Vol. 31, #29, p. 40, 35),
9. Teen Births Decline
In this New York Times article, Nicholas Bakalar reports that the National Center for Health Statistics found that fewer teenagers gave birth in 2010 than in any year since 1946. Birth rates among young women 15-19 fell in all but three states and in all racial, ethnic, and age groups. “I think the current generation of youth are perhaps more conscientious and cautious,” says Dr. John Santelli of Columbia University.
The Centers for Disease Control and Prevention has corroborating statistics: Since 1991, the percentage of teenagers who have ever had sex has decreased by 15 percent, the number who have had sex with four or more partners has decreased by 26 percent, and the percentage using condoms has increased by 32 percent.
These findings “may run counter to depictions of licentious teenagers on reality television,” says Bakalar, “but scientists say there can be little doubt about the data.” What accounts for these trends? Sex education, concern about sexually transmitted diseases, and perhaps the abstinence movement.
“Teenage Birth Rate Is Lowest Since 1946” by Nicholas Bakalar in The New York Times, Apr. 17, 2012,
© Copyright 2012 Marshall Memo LLC
Do you have feedback? Is anything missing? If you have comments or suggestions, if you saw an article or web item in the last week that you think should have been summarized, or if you would like to suggest additional publications that should be covered by the Marshall Memo, please e-mail: email@example.com