Jamelle Bouie: When politicians invoke the Founding Fathers, remember this


As things stand, 48 states are set to allocate their electors in November according to the winner of the popular vote in their state. Whoever gets the most votes — no matter the margin — gets all the electors.

In the remaining two states, Maine and Nebraska, the process works a little differently. There, electoral votes are partly divvied up on a proportional basis. In Nebraska, two of its five electoral votes are given to the winner of the statewide popular vote, and the other three are given to the victor in each of the state’s three congressional districts. In Maine, two of the state’s four electoral votes go to the winner of the popular vote, and the other two are split between its two congressional districts.
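The quasi-proportional rule described above can be sketched as a small function. This is a simplified illustration of the Maine/Nebraska allocation, not a model of the full certification process:

```python
def allocate_electors(statewide_winner, district_winners):
    """Maine/Nebraska-style allocation: 2 electors to the statewide
    popular-vote winner, plus 1 elector per congressional district won."""
    tally = {}
    tally[statewide_winner] = tally.get(statewide_winner, 0) + 2
    for winner in district_winners:
        tally[winner] = tally.get(winner, 0) + 1
    return tally

# Nebraska 2020: Trump won statewide and the 1st and 3rd districts;
# Biden won the 2nd.
print(allocate_electors("Trump", ["Trump", "Biden", "Trump"]))
# → {'Trump': 4, 'Biden': 1}
```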

In the 2020 presidential election, for only the second time since it adopted this system in 1991, Nebraska split its electoral votes between the two candidates on the ballot. Donald Trump won the state and its 1st and 3rd congressional districts, while Joe Biden won the 2nd Congressional District, representing parts of Omaha and surrounding areas.

Biden won that election with 306 electoral votes; the Nebraska elector did not make a difference. But in an exceedingly close election — say, an election between an unpopular incumbent and an equally unpopular challenger (himself a former incumbent) — it could. Which is why Nebraska Republicans have begun an effort, backed by Trump, to end its quasi-proportional allocation of electoral votes.

Nebraska Republicans seem to know that this move is a vigorous exercise in partisan venality, which is why they’ve tried to defend it with a time-honored appeal to the Founding Fathers. “It would bring Nebraska in line with 48 of our fellow states, better reflect the founders’ intent and ensure our state speaks with one unified voice in presidential elections,” Gov. Jim Pillen, a Republican, wrote in a statement. (Trump called it “a very smart letter.”)

It is well within the rights of the Nebraska Legislature to adopt the winner-take-all system that most other states use to allocate electors. But I am less interested in the substance of the change than I am in the justification for the decision. That is the common, even ubiquitous, idea that the current form of the Electoral College represents the original intent of the drafters and ratifiers of the Constitution. The problem is simple: It’s not true.

Any attempt to impute an original intent to the framers’ construction of the Electoral College runs into the basic problem that it was, even compared with everything else in the Constitution, a last-minute and hastily constructed compromise meant to get around a large set of almost intractable differences.

Delegates to the Philadelphia Convention in 1787 were consumed with argument over the method and mode of presidential selection. In the first vote taken on the issue, early in the convention, most delegates favored legislative selection. But several of the most influential delegates, among them James Madison of Virginia, thought that this threatened the separation of powers, and thus the basic structure of the new government.

Madison observed in his notes that it was “a fundamental principle of free government that the legislative, executive, and judiciary powers should be separately … and independently exercised.” As such, it was “essential, then, that the appointment of the executive should either be drawn from some source or held by some tenure that will give him a free agency with regard to the legislature.”

Madison and his like-minded allies — Pennsylvanians James Wilson and Gouverneur Morris, for example — favored a national popular vote to choose the president. Direct election by “the people” (for the most part, property-owning white men) would guarantee executive independence and filter for men of “distinguished character, or services.” On the other side were Southern delegates who thought a popular vote would put them on the losing side of presidential contests; the free population of the North was, of course, larger than the free population of the South. Still other delegates wanted the legislative option to prevail.

In “Why Do We Still Have the Electoral College?” historian Alexander Keyssar writes that more than a few other ideas bubbled up over the course of that summer. Among them: “selection by the governors of the states or by state legislatures; election by a committee of 15 legislators chosen by lot (and obliged to act as soon as they were chosen, to avoid intrigue); a popular election in which each voter cast ballots for two or three candidates, only one of whom could be from his own state; nomination of one candidate by the people of each state, with the winner to then be chosen by the national legislature.”

As the convention came to a close, the exhausted delegates finally made a choice: Someone else would have to choose. They turned the issue over to a committee on “postponed parts.” That committee, in turn, tried to chart a path of least resistance through the options at hand. First, it adopted an idea — introduced during the summer of discussion — to have electors act as intermediaries between the public and the selection of the president. In a concession to supporters of legislative selection, those electors would gather in a purpose-made body to make their decision. In a nod to the concerns of Southern delegates, the distribution of electors would be based on representation in the House and Senate.

The committee made its recommendation, and with one major modification — the House of Representatives, not the Senate, would decide in the event that no candidate earned a majority — the convention accepted it. The delegates had no real sense of how the Electoral College would work in practice. More than a few thought that most elections would be decided by the House. And in any case, they also knew that however the people chose a president, their first choice would be George Washington. To both the framers and the ratifiers, the mechanism was less important than the man.

In the first presidential elections of the American republic, the Electoral College worked mostly as designed. Some states held popular elections to choose electors; others had them selected by state legislatures. Electors cast their ballots for the man who would be president, Washington, and designated a candidate for vice president as well, John Adams (an effort that required some coordination since, until the ratification of the 12th Amendment, electors could not cast separate ballots for president and vice president). But with the full emergence of partisan politics during Washington’s second term, and his departure at its conclusion, state legislatures, essentially acting as partisan political organizations, tried to game the system.

“States,” Keyssar notes, “took advantage of the flexible constitutional architecture to switch procedures from one election to the next.” They would move from legislative selection of electors to a district-based vote to a winner-take-all election (called the general ticket) depending on which option was more likely to secure victory for the legislature’s favored candidate. Virginia, for example, switched from district elections to winner-take-all in 1800 to help Thomas Jefferson win the presidency.

As formal political parties took shape — and center stage — in American politics, more and more state legislatures adopted winner-take-all allocation of electors, in addition to taking steps to ensure that electors would not be independent of the party that chose them. By the time of Jefferson’s battle for reelection in 1804, the framers’ Electoral College — a deliberative body that would filter candidates for selection by the House — was a dead letter. In its place was an effectively new system tailored to partisan reality.

As Keyssar writes, “Candidates for president and vice president were put forward by political parties, centered in Congress; the parties also coordinated the election campaigns. Nearly everywhere the strategic goal of these campaigns was to win legislative or popular majorities within entire states — since all but four (out of 17) delivered their full complement of electoral votes to one candidate. Those votes were physically cast by electors who gathered in state capitals and served simply as messengers: they did not deliberate, discuss or ‘think.’”

The Electoral College as we know it is less a product of the insight or design of the framers and more a contingent adaptation to the political world that emerged out of the first decade of the American republic. That world would change again, in the 1820s and ’30s, with the rise of Andrew Jackson, universal white male suffrage and the mass political party. The electoral system would adjust; by 1837, not willing to lose any partisan advantage, every state (save South Carolina) would adopt winner-take-all allocation of electors by popular vote. The tally of popular votes took on new significance as well: It stood, for the winner, as a symbol of popular legitimacy, even if it didn’t contribute to the outcome of the election.

There is nothing in the Constitution that says Nebraska Republicans can’t change the way the state allocates its electoral votes. At most, if they made the change, Nebraska Republicans would be violating the informal rules of American politics, which strongly discourage this abuse of the process. Again, I think Nebraska Republicans know this, which explains their immediate appeal to the supposed intent of the framers. This is something Americans do. We use the framers — or more accurately, we use the myths and folk traditions we’ve developed around the framers — to legitimize our decisions in the present day and to try to delegitimize those of our opponents.

But whether as men or myths, the framers cannot do this. They cannot justify the choices we make while we navigate our world. The beauty, and perhaps the curse, of self-government is that it is, in fact, self-government. Our choices are our own and we must defend them on their own terms. And while it is often good and useful to look to the past for guidance, the past cannot answer our questions or tackle our problems.

Novelty may disturb men’s minds, but we are still obligated to take our circumstances on their own terms, not those of an age long settled into dust.

Jamelle Bouie writes a column for the New York Times.

Real World Economics: Econ 101 explains district budgeting dilemmas


Edward Lotterman

The St. Paul Public School District’s need to close a projected $100 million deficit for the coming year, discussed in recent weeks, is not unique.

Similar situations prevail for school districts large and small across the state.

Moreover, the same underlying problem is hitting many other institutions, including churches, scout troops and American Legion posts.

The phenomenon is more than nationwide. Other industrialized countries face it as well, especially in Europe. For economics teachers, it illustrates at least two important introductory principles, plus several others.

One is “age structure of a population.” This deals with how all the persons in specific jurisdictions, from nations to states, cities and school districts, are divided by gender and into various age groups.

The “population pyramid,” a graphic device shaped somewhat like a Christmas tree, is a quick way to convey this demographic situation. It is essentially two bar charts, male and female by age, placed back to back vertically. A quick look shows when birth cohorts were large or small: the wider the bars, the more people there are of that age; the narrower, the fewer.
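The back-to-back bar-chart construction can be sketched in a few lines of text output. The cohort counts below are invented for illustration, not real census figures:

```python
# A minimal text-mode "population pyramid": two horizontal bar charts,
# male and female by age group, placed back to back.
age_groups = [("0-14", 9, 8), ("15-29", 10, 10), ("30-44", 11, 11),
              ("45-59", 12, 12), ("60-74", 10, 11), ("75+", 5, 7)]

lines = []
for group, males, females in reversed(age_groups):  # oldest cohort on top
    left = ("#" * males).rjust(15)   # male bar grows to the left
    right = "#" * females            # female bar grows to the right
    lines.append(f"{left} |{group:^7}| {right}")
print("\n".join(lines))
```

A narrow base relative to the middle bars is the visual signature of the shrinking birth cohorts the column describes.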

The second topic, that of “cost curves of the firm,” is foundational to all “production economics” dealing with how goods and services are produced, whether for profit or not.

Now, let’s apply these two principles to real-world situations.

On the first topic, consider that public school districts, congregations, rural clinics, 4H clubs and so forth all struggle with the fact that our population is aging. Even though the populations of our nation and state are higher than ever, the fractions of these populations that are of ages for schooling or for youth activities are smaller, relative to the general population, than they were a generation ago.

Do the math: In 1957, we had 4.3 million births in our nation when the population totaled 172 million. In 2022, 3.67 million babies were born to 334 million inhabitants. Births per 1,000 population were at 25 or above for most of the 1950s but are now under 14. If one excludes births to females who immigrated at any age and their first-generation daughters, that indicator drops significantly more. So as more people age, fewer people are born to replace them.
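The crude-birth-rate arithmetic in the paragraph above, births per 1,000 population, checks out directly from the figures the column cites:

```python
def births_per_1000(births, population):
    """Crude birth rate: annual births per 1,000 people."""
    return births / population * 1000

# Figures cited in the column:
rate_1957 = births_per_1000(4_300_000, 172_000_000)
rate_2022 = births_per_1000(3_670_000, 334_000_000)
print(round(rate_1957, 1), round(rate_2022, 1))  # → 25.0 11.0
```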

Secondly, the costs of producing almost anything, whether cars, corn or educated kids, are complex.

“Fixed costs” must be paid regardless of the number of “units” produced, whether bushels of corn, Sunday school attendees or high school graduates. It costs the same to light and heat a classroom whether it has five students or 40. Readers may be more familiar with it as “overhead.”

Other costs do vary with output. These “variable costs” increase or decrease based on units served or produced. Textbooks, numbers of teachers, cafeteria groceries, fuel for buses, all vary with the number of students, although not necessarily in a smooth linear fashion. Natural gas to heat a school building only becomes a variable cost when there are so few students that the building has to be shuttered. Cafeteria food costs, on the other hand, will vary greatly when more or fewer students eat lunch in a given school year.

For each of such costs, the district may want to know a per-unit-of-product amount. Say a school district needs to spend $1 million on something regardless of number of students. If there are 20,000 students, the “average fixed cost” per student is $50. Drop enrollments to 10,000 and the average fixed cost doubles to $100 per student. With 5,000 kids in the schools, each would represent a $200 share. The same calculations apply to a rural congregation dividing the insurance cost for their building or a pastor’s salary by 300 members versus 75.
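The average-fixed-cost arithmetic above is a single division, repeated at each enrollment level:

```python
def average_fixed_cost(fixed_cost, units):
    """Fixed cost spread over units served (students, members, etc.)."""
    return fixed_cost / units

FIXED = 1_000_000  # the column's $1 million, owed regardless of enrollment
for enrollment in (20_000, 10_000, 5_000):
    cost = average_fixed_cost(FIXED, enrollment)
    print(f"{enrollment:>6} students -> ${cost:.0f} each")
```

Halving enrollment doubles each student's share, which is exactly the squeeze a shrinking district feels.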

Similarly, a district may want to know variable costs on a per-student basis. The number of teachers needed for an 800-student district is smaller than for an 8,000-student one, but the cost per student is not exactly the same. The per-student cost of groceries for a 1,500-student district may be lower than for a 400-student one because of discounts for larger orders.

The problem for school districts and churches is that a high proportion of costs are fixed. One must heat buildings and periodically renew roofs whether there are 50 worshipers or 500, whether there are 800 students or 200. A classroom floor needs cleaning whether 12 kids use it or 28. Nowadays, you need a school nurse or other person qualified to manage kids’ medications in a building with 200 or 500. Ditto for a custodian, for someone in the office answering phones, accepting deliveries and handling visitors. And you always need someone who can fix glitches with routers, servers and video projector connections.

Some “inputs,” like teachers, are “lumpy.” You can have one biology teacher or three but you cannot have 2.63. Yes, you can have half-time positions. You can have one degreed librarian cover five buildings while less-educated and less costly aides actually help students. You can teach biology every semester but offer geology or physiology only every other year. But it is not a smooth operation like gently easing up on a gas pedal. Considering whether the staffing costs are governed by a collective bargaining agreement adds a new element to the budgeting process.
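The “lumpiness” of staffing can be modeled as a step function: needed positions jump in whole-number increments as enrollment crosses class-size thresholds. The 25-student cap below is an illustrative assumption, not a figure from the column:

```python
import math

def teachers_needed(students, max_class_size=25):
    """You can't hire 2.63 teachers: round up to the next whole position."""
    return math.ceil(students / max_class_size)

for n in (50, 51, 75, 76):
    print(n, teachers_needed(n))  # note the jump at 51 and at 76
```

One extra student at a threshold forces a whole new salary, which is why staffing costs never ease down "like gently easing up on a gas pedal."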

Another problem is that parents and citizens in general demand certain services. Federal and state laws mandate services for special-needs students who would have been shut out when I was a kid. Even in small districts, AP math or calculus courses are demanded rather than just the algebra-geometry-trig offerings most baby boomers got.

Yes, staffing and costs at district offices have burgeoned. For some people, including some teachers, “360 Colborne” — the street address of SPPS central administration — has become an epithet used to explain myriad problems. Bureaucracies grow easily but resist downsizing. A big part of St. Paul’s budget problems stems from the very predictable ending of large sums of federal funds available under the COVID-era American Rescue Plan Act of 2021. Skeptical conservatives are entirely correct when they say that “temporary” programs funded with temporary dollars inevitably become permanent to some extent, because no organization wants to cut back. There is always someone who benefits from a program and doesn’t want it ended.

Don’t blame “bureaucrats” for all problems though. Every district superintendent and finance chief knows that fixed costs can be cut by closing school buildings. They also know that announcing closings of schools that have served neighborhoods for a century always touches off political firestorms. Ditto for tax referendums that could increase school funding.

The final econ idea that is useful in thinking about these challenges is that of “marginal cost,” the change in total costs with a one-unit change, up or down, in some output or input. If we lose one student, how does that change our total costs? If we added another service, how would that increase our total costs? How would it increase total costs to add one section of AP physics? How much would it decrease total costs to cut lacrosse?
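Marginal cost is simply the difference in total cost between two adjacent scenarios. The dollar figures below are invented for illustration, not district budget numbers:

```python
def marginal_cost(total_before, total_after):
    """Change in total cost from a one-unit change in output or services."""
    return total_after - total_before

# Hypothetical: adding one section of AP physics costs a teacher
# stipend plus lab supplies on top of the existing budget.
budget = 52_000_000
with_physics = budget + 85_000 + 6_000
print(marginal_cost(budget, with_physics))  # → 91000
```

The same subtraction, run in reverse, answers the "how much would cutting lacrosse save?" question.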

Specific situations will vary, but demographic changes, especially rapidly dropping birth rates worldwide, will be even more salient in this century than in the last. Knowing how economists explain what is going on can help us understand it.

St. Paul economist and writer Edward Lotterman can be reached at stpaul@edlotterman.com.

Other voices: Criminal justice reform is alive. Thank conservatives


Congress eliminated parole from the federal criminal justice system in 1984, but it didn’t completely do away with post-release supervision. About 3 of every 4 people leaving federal prison remain under supervision, often for years, and often for no good reason.

A transition period to ensure successful re-entry into society after prison makes sense. But federal supervised release — a parole-like period of restrictions post-prison — lasts too long and is too expensive. It makes little distinction between those who are at high risk to break the law again and those at negligible risk. And there is mounting evidence that the longer supervision goes on, the greater the chance that the former prisoner will get into trouble again. The extended lack of real freedom interrupts transition to responsible post-prison behavior.

The Safer Supervision Act is a bipartisan bill that would shorten post-prison supervision upon a showing that public safety would not be negatively affected.

It is similar in spirit to the First Step Act, another bipartisan federal criminal justice reform that was signed into law in 2018 by President Trump. The act reduced excessive federal prison sentences while encouraging rehabilitation. It was one of the few truly bipartisan successes in years, a result of efforts by CNN commentator Van Jones and U.S. Sen. Cory Booker, a Democrat from New Jersey.

And, importantly, reform-oriented Republicans.

Liberals could too easily mislead themselves into believing that tough-on-crime conservative lawmakers signed on to the First Step Act six years ago and are signing on to the Safer Supervision Act now as reform newbies. That’s a far cry from the truth.

Criminal justice reform has deep roots in political conservatism. Some of the most meaningful recent sentencing reforms have come from states like Texas, Georgia and South Carolina. The organization Right on Crime and other conservative reform groups draw on religious traditions that stress repentance and forgiveness, plus a deep concern over government expansion and waste — including in public safety and punishment.

Writing in favor of the Safer Supervision Act, former House Speaker Newt Gingrich, a Georgia Republican, emphasized the conservative critique of an expansive carceral system.

“Our nation’s public safety systems are not immune from the bloat, waste, and ineffectiveness that naturally grows in massive government operations,” Gingrich wrote.

You don’t have to be a fan of Gingrich or his politics to appreciate his support for badly needed changes in a criminal justice system with far too large a footprint and too little benefit to show for it.

A decade ago, Gingrich joined with the late Malibu billionaire B. Wayne Hughes Jr., in writing a Los Angeles Times op-ed in favor of Proposition 47, the California reform that right-sizes drug offenses and small property crimes. Hughes, a staunch conservative, founded and ran an organization that assisted crime victims and former offenders. He was one of the biggest donors to the Proposition 47 campaign.

The two noted that Texas reversed prison expansion in 2007, saved billions of dollars and used the savings on drug treatment and mental health services. Texas reset the dividing line between misdemeanor and felony theft at $2,500 (California’s is $950 — well short of Texas levels).

Ohio, Oklahoma, Kentucky, Missouri and Mississippi — all red states, Gingrich and Hughes noted — adopted their own reforms along the lines of Texas’.

“Now voters in California will have a chance to do the same, using costly prison beds for dangerous and hardened criminals,” Gingrich and Hughes wrote. “It is time to stop wasting taxpayer dollars on locking up low-level offenders.”

Today’s election-year posturing has clouded the facts and original politics of criminal justice reform. Some elected Democrats, fearing for their political lives, embrace false connections between smart reforms and periodic spikes in crime. Some elected Republicans — especially in California — betray the conservative reform principles articulated by Gingrich, U.S. Sen. Rand Paul (R-Kentucky) and others to seek backing from law enforcement and other groups that see political gain in embracing fear.

But even law enforcement organizations such as the Major Cities Chiefs Assn. have joined with prosecutors, defense lawyers, religious groups and progressive reformers to embrace the Safer Supervision Act.

Truth be told, the reform doesn’t go far enough. But that’s no reason to reject it. The First Step Act made it clear by its name that more reform steps were needed. But they are to be taken one at a time, as conservatives and liberals, Democrats and Republicans, seek common ground. Passing the Safer Supervision Act is a step that Congress ought to take now.

— The Los Angeles Times editorial board

Robert Pearl: Why ChatGPT’s ‘memory’ will be a health care game changer


OpenAI generated massive media interest with the announcement that its signature product, ChatGPT, is gaining memory. The new feature enables the generative artificial intelligence system to “carry what it learns between chats, allowing it to provide more relevant responses,” according to the company.

As Congress holds hearings and regulators rumble with apprehension, the media coverage so far has generally overlooked the biggest part of this announcement, which has direct ties to American health care:

The development of memory-powered AI is a pivotal step toward transforming U.S. medicine.

Although there are many technological and regulatory hurdles to clear — and fears around privacy and security to mitigate — this development has the potential to make health care more personalized, patient-centric and affordable. These improvements — alongside the potential pitfalls of AI-empowered health care — are the subject of my upcoming book, “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine.”

Here are three ways generative AI’s improved memory will transform patient care:

More accurate diagnoses

For over a decade, clinicians have wanted to precisely tailor care to each patient’s unique health profile, including their genetic makeup and personal health preferences. But too much has stood in the way.

One major challenge is the sheer volume of knowledge required to customize medical care. The human genome consists of approximately 3 billion base pairs of DNA, which if typed out as letters would fill about 200 New York City phone books. What’s more, medical knowledge doubles every 73 days, making it almost impossible for any human to keep up with all the innovative medical findings and updated guidelines for helping patients.

A third hurdle is technological. With the average patient consulting 19 different doctors throughout their lifetime, an individual’s electronic medical records are often dispersed across numerous medical offices and health systems. The lack of interoperability among EMR systems compounds this issue, preventing clinicians — and, by extension, generative AI — from accessing a patient’s complete medical history.

Currently, ChatGPT’s “context window” (the amount of text the model can keep in view at once) falls well short of the nearly 17,000 words found in the average patient’s medical record.

However, generative AI systems are predicted to become 30 times more powerful within the next five years, dramatically expanding their data retention capabilities and enhancing their reliability. This, combined with OpenAI’s specialized plug-ins (known as GPTs), offers promising opportunities. Initially, generative AI might access a limited set of patient data through platforms like MyChart, which can be used on personal computers or smartphones. Eventually, however, generative AI will enable patients to consolidate their digital medical records from various health care providers.

This will create a comprehensive, personalized health record, serving as a reliable resource for both patients and their health care teams.

With this information stored in an AI’s memory, patients will be able to input their symptoms and receive specific diagnostic and treatment suggestions.

For people who are uncertain about the significance or urgency of new symptoms, the AI will provide reliable advice. And for patients with rare or complex conditions, it will offer invaluable second opinions. Advanced diagnostic ability, alongside comprehensive health care information, will be instrumental in reducing the 400,000 annual deaths attributed to misdiagnoses.

Fewer complications from chronic disease

Chronic diseases like diabetes, hypertension, obesity and asthma affect six in 10 U.S. adults. Complications from these diseases account for 1.7 million deaths each year.

Unlike acute illnesses that appear suddenly and usually are resolved quickly, chronic conditions persist over time, impacting tens of millions of Americans every single day.

Doctors care for these conditions in an episodic fashion, which is far from optimal. Patients with chronic diseases typically see their physician every three to four months, providing doctors with only a snapshot of their health status. As a result, chronic diseases aren’t controlled as well as they should be, which leads to life-threatening, and preventable, complications.

At a national level, hypertension is adequately controlled just 60 percent of the time, and effective blood sugar management in type 2 diabetes is achieved less than half the time. Data from the Centers for Disease Control and Prevention indicate that proper disease prevention and management approaches would reduce the risk of kidney failure, heart attacks and strokes by 30 percent to 50 percent.

Applying these percentages to the U.S. death toll from chronic disease complications, these CDC estimates indicate that more than half a million lives could be saved annually.
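The claim above can be checked directly: even the low end of the CDC's 30-to-50-percent range, applied to 1.7 million annual deaths, exceeds half a million.

```python
# Applying the CDC's estimated 30-50% reduction to the 1.7 million
# annual deaths from chronic-disease complications cited above.
deaths = 1_700_000
low, high = 0.30, 0.50
print(int(deaths * low), int(deaths * high))  # → 510000 850000
```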

Generative AI, once connected to home wearable devices, can update patients about their health status and suggest medication adjustments or lifestyle changes. It can also remind them about necessary screenings and even facilitate testing appointments and transportation, thereby improving disease management, reducing complications and maximizing health outcomes.

Safer hospitals

Generative AI with memory will radically improve inpatient care, as well. Once it’s integrated with bedside monitors and able to remember a patient’s clinical status over time, the AI system will be able to immediately alert professionals when a problem arises, so they can intervene.

Additionally, video monitoring systems powered by AI could oversee the delivery of medical care, pinpointing any departures from established best practices. This real-time oversight would provide immediate alerts to caregivers, preventing medication mishaps and reducing the risk of infection.

These two uses of AI technology would help reduce the staggering 250,000 deaths each year attributed to preventable medical errors.

While ChatGPT and similar technologies hold immense potential, today’s generative AI tools still require clinician supervision. But looking ahead, the exponential growth of generative AI’s capabilities (doubling every year) points to a transformative future for the practice of medicine.

Now is the time for both clinicians and patients to become comfortable using generative AI. And it is an opportunity for regulators and elected officials to advance, not stifle, its potential. With memory and GPTs, the doctor’s AI toolkit is quickly filling up.

Robert Pearl is a clinical professor of plastic surgery at the Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group. He wrote this column for the Fulcrum, a nonprofit, nonpartisan news platform covering efforts to fix our governing systems.
