Bush Lets U.S. Spy on Callers Without Courts

WASHINGTON, Dec. 15 - Months after the Sept. 11 attacks, President Bush secretly authorized the National Security Agency to eavesdrop on Americans and others inside the United States to search for evidence of terrorist activity without the court-approved warrants ordinarily required for domestic spying, according to government officials.

Under a presidential order signed in 2002, the intelligence agency has monitored the international telephone calls and international e-mail messages of hundreds, perhaps thousands, of people inside the United States without warrants over the past three years in an effort to track possible "dirty numbers" linked to Al Qaeda, the officials said. The agency, they said, still seeks warrants to monitor entirely domestic communications.

The previously undisclosed decision to permit some eavesdropping inside the country without court approval was a major shift in American intelligence-gathering practices, particularly for the National Security Agency, whose mission is to spy on communications abroad. As a result, some officials familiar with the continuing operation have questioned whether the surveillance has stretched, if not crossed, constitutional limits on legal searches.

"This is really a sea change," said a former senior official who specializes in national security law. "It's almost a mainstay of this country that the N.S.A. only does foreign searches."

Nearly a dozen current and former officials, who were granted anonymity because of the classified nature of the program, discussed it with reporters for The New York Times because of their concerns about the operation's legality and oversight.

According to those officials and others, reservations about aspects of the program have also been expressed by Senator John D. Rockefeller IV, the West Virginia Democrat who is the vice chairman of the Senate Intelligence Committee, and a judge presiding over a secret court that oversees intelligence matters. Some of the questions about the agency's new powers led the administration to temporarily suspend the operation last year and impose more restrictions, the officials said.

The Bush administration views the operation as necessary so that the agency can move quickly to monitor communications that may disclose threats to the United States, the officials said. Defenders of the program say it has been a critical tool in helping disrupt terrorist plots and prevent attacks inside the United States.

Administration officials are confident that existing safeguards are sufficient to protect the privacy and civil liberties of Americans, the officials say. In some cases, they said, the Justice Department eventually seeks warrants if it wants to expand the eavesdropping to include communications confined within the United States. The officials said the administration had briefed Congressional leaders about the program and notified the judge in charge of the Foreign Intelligence Surveillance Court, the secret Washington court that deals with national security issues.

The White House asked The New York Times not to publish this article, arguing that it could jeopardize continuing investigations and alert would-be terrorists that they might be under scrutiny. After meeting with senior administration officials to hear their concerns, the newspaper delayed publication for a year to conduct additional reporting. Some information that administration officials argued could be useful to terrorists has been omitted.

While many details about the program remain secret, officials familiar with it say the N.S.A. eavesdrops without warrants on up to 500 people in the United States at any given time. The list changes as some names are added and others dropped, so the number monitored in this country may have reached into the thousands since the program began, several officials said. Overseas, about 5,000 to 7,000 people suspected of terrorist ties are monitored at one time, according to those officials.

Several officials said the eavesdropping program had helped uncover a plot by Iyman Faris, an Ohio trucker and naturalized citizen who pleaded guilty in 2003 to supporting Al Qaeda by planning to bring down the Brooklyn Bridge with blowtorches. What appeared to be another Qaeda plot, involving fertilizer bomb attacks on British pubs and train stations, was exposed last year in part through the program, the officials said. But they said most people targeted for N.S.A. monitoring have never been charged with a crime, including an Iranian-American doctor in the South who came under suspicion because of what one official described as dubious ties to Osama bin Laden.

The eavesdropping program grew out of concerns after the Sept. 11 attacks that the nation's intelligence agencies were not poised to deal effectively with the new threat of Al Qaeda and that they were handcuffed by legal and bureaucratic restrictions better suited to peacetime than war, according to officials. In response, President Bush significantly eased limits on American intelligence and law enforcement agencies and the military.

But some of the administration's antiterrorism initiatives have provoked an outcry from members of Congress, watchdog groups, immigrants and others who argue that the measures erode protections for civil liberties and intrude on Americans' privacy.

Opponents have challenged provisions of the USA Patriot Act, the focus of contentious debate on Capitol Hill this week, that expand domestic surveillance by giving the Federal Bureau of Investigation more power to collect information like library lending lists or Internet use. Military and F.B.I. officials have drawn criticism for monitoring what were largely peaceful antiwar protests. The Pentagon and the Department of Homeland Security were forced to retreat on plans to use public and private databases to hunt for possible terrorists. And last year, the Supreme Court rejected the administration's claim that those labeled "enemy combatants" were not entitled to judicial review of their open-ended detention.

Mr. Bush's executive order allowing some warrantless eavesdropping on those inside the United States -- including American citizens, permanent legal residents, tourists and other foreigners -- is based on classified legal opinions that assert that the president has broad powers to order such searches, derived in part from the September 2001 Congressional resolution authorizing him to wage war on Al Qaeda and other terrorist groups, according to the officials familiar with the N.S.A. operation.

The National Security Agency, which is based at Fort Meade, Md., is the nation's largest and most secretive intelligence agency, so intent on remaining out of public view that it has long been nicknamed "No Such Agency." It breaks codes and maintains listening posts around the world to eavesdrop on foreign governments, diplomats and trade negotiators as well as drug lords and terrorists. But the agency ordinarily operates under tight restrictions on any spying on Americans, even if they are overseas, or disseminating information about them.

What the agency calls a "special collection program" began soon after the Sept. 11 attacks, as it looked for new tools to attack terrorism. The program accelerated in early 2002 after the Central Intelligence Agency started capturing top Qaeda operatives overseas, including Abu Zubaydah, who was arrested in Pakistan in March 2002. The C.I.A. seized the terrorists' computers, cellphones and personal phone directories, said the officials familiar with the program. The N.S.A. surveillance was intended to exploit those numbers and addresses as quickly as possible, they said.

In addition to eavesdropping on those numbers and reading e-mail messages to and from the Qaeda figures, the N.S.A. began monitoring others linked to them, creating an expanding chain. While most of the numbers and addresses were overseas, hundreds were in the United States, the officials said.

Under the agency's longstanding rules, the N.S.A. can target for interception phone calls or e-mail messages on foreign soil, even if the recipients of those communications are in the United States. Usually, though, the government can only target phones and e-mail messages in the United States by first obtaining a court order from the Foreign Intelligence Surveillance Court, which holds its closed sessions at the Justice Department.

Traditionally, the F.B.I., not the N.S.A., seeks such warrants and conducts most domestic eavesdropping. Until the new program began, the N.S.A. typically limited its domestic surveillance to foreign embassies and missions in Washington, New York and other cities, and obtained court orders to do so.

Since 2002, the agency has been conducting some warrantless eavesdropping on people in the United States who are linked, even if indirectly, to suspected terrorists through the chain of phone numbers and e-mail addresses, according to several officials who know of the operation. Under the special program, the agency monitors their international communications, the officials said. The agency, for example, can target phone calls from someone in New York to someone in Afghanistan.

Warrants are still required for eavesdropping on entirely domestic-to-domestic communications, those officials say, meaning that calls from that New Yorker to someone in California could not be monitored without first going to the Foreign Intelligence Surveillance Court.

After the special program started, Congressional leaders from both political parties were brought to Vice President Dick Cheney's office in the White House. The leaders, who included the chairmen and ranking members of the Senate and House intelligence committees, learned of the N.S.A. operation from Mr. Cheney, Lt. Gen. Michael V. Hayden of the Air Force, who was then the agency's director and is now a full general and the principal deputy director of national intelligence, and George J. Tenet, then the director of the C.I.A., officials said.

It is not clear how much the members of Congress were told about the presidential order and the eavesdropping program. Some of them declined to comment about the matter, while others did not return phone calls.

Later briefings were held for members of Congress as they assumed leadership roles on the intelligence committees, officials familiar with the program said. After a 2003 briefing, Senator Rockefeller, the West Virginia Democrat who became vice chairman of the Senate Intelligence Committee that year, wrote a letter to Mr. Cheney expressing concerns about the program, officials knowledgeable about the letter said. It could not be determined if he received a reply. Mr. Rockefeller declined to comment. Aside from the Congressional leaders, only a small group of people, including several cabinet members and officials at the N.S.A., the C.I.A. and the Justice Department, know of the program.

Some officials familiar with it say they consider warrantless eavesdropping inside the United States to be unlawful and possibly unconstitutional, amounting to an improper search. One government official involved in the operation said he privately complained to a Congressional official about his doubts about the program's legality. But nothing came of his inquiry. "People just looked the other way because they didn't want to know what was going on," he said.

A senior government official recalled that he was taken aback when he first learned of the operation. "My first reaction was, 'We're doing what?' " he said. While he said he eventually felt that adequate safeguards were put in place, he added that questions about the program's legitimacy were understandable.

Some of those who object to the operation argue that it is unnecessary. By getting warrants through the foreign intelligence court, the N.S.A. and F.B.I. could eavesdrop on people inside the United States who might be tied to terrorist groups without skirting longstanding rules, they say.

The standard of proof required to obtain a warrant from the Foreign Intelligence Surveillance Court is generally considered lower than that required for a criminal warrant -- intelligence officials only have to show probable cause that someone may be "an agent of a foreign power," which includes international terrorist groups -- and the secret court has turned down only a small number of requests over the years. In 2004, according to the Justice Department, 1,754 warrants were approved. And the Foreign Intelligence Surveillance Court can grant emergency approval for wiretaps within hours, officials say.

Administration officials counter that they sometimes need to move more urgently. Those involved in the program also said that the N.S.A.'s eavesdroppers might need to start monitoring large batches of numbers all at once, and that it would be impractical to seek permission from the Foreign Intelligence Surveillance Court first.

The N.S.A. domestic spying operation has stirred such controversy among some national security officials in part because of the agency's cautious culture and longstanding rules.

Widespread abuses -- including eavesdropping on Vietnam War protesters and civil rights activists -- by American intelligence agencies became public in the 1970's and led to passage of the Foreign Intelligence Surveillance Act, which imposed strict limits on intelligence gathering on American soil. Among other things, the law required search warrants, approved by the secret F.I.S.A. court, for wiretaps in national security cases. The agency, deeply scarred by the scandals, adopted additional rules that all but ended domestic spying on its part.

After the Sept. 11 attacks, though, the United States intelligence community was criticized for being too risk-averse. The National Security Agency was even cited by the independent 9/11 Commission for adhering to self-imposed rules that were stricter than those set by federal law.

Several senior government officials say that when the special operation began, there were few controls on it and little formal oversight outside the N.S.A. The agency can choose its eavesdropping targets and does not have to seek approval from Justice Department or other Bush administration officials. Some agency officials wanted nothing to do with the program, apparently fearful of participating in an illegal operation, a former senior Bush administration official said. Before the 2004 election, the official said, some N.S.A. personnel worried that the program might come under scrutiny by Congressional or criminal investigators if Senator John Kerry, the Democratic nominee, was elected president.

In mid-2004, concerns about the program expressed by national security officials, government lawyers and a judge prompted the Bush administration to suspend elements of the program and revamp it.

For the first time, the Justice Department audited the N.S.A. program, several officials said. And to provide more guidance, the Justice Department and the agency expanded and refined a checklist to follow in deciding whether probable cause existed to start monitoring someone's communications, several officials said.

A complaint from Judge Colleen Kollar-Kotelly, the federal judge who oversees the Foreign Intelligence Surveillance Court, helped spur the suspension, officials said. The judge questioned whether information obtained under the N.S.A. program was being improperly used as the basis for F.I.S.A. wiretap warrant requests from the Justice Department, according to senior government officials. While not knowing all the details of the exchange, several government lawyers said there appeared to be concerns that the Justice Department, by trying to shield the existence of the N.S.A. program, was in danger of misleading the court about the origins of the information cited to justify the warrants.

One official familiar with the episode said the judge insisted to Justice Department lawyers at one point that any material gathered under the special N.S.A. program not be used in seeking wiretap warrants from her court. Judge Kollar-Kotelly did not return calls for comment.

A related issue arose in a case in which the F.B.I. was monitoring the communications of a terrorist suspect under a F.I.S.A.-approved warrant, even though the National Security Agency was already conducting warrantless eavesdropping.

According to officials, F.B.I. surveillance of Mr. Faris, the Brooklyn Bridge plotter, was dropped for a short time because of technical problems. At the time, senior Justice Department officials worried what would happen if the N.S.A. picked up information that needed to be presented in court. The government would then either have to disclose the N.S.A. program or mislead a criminal court about how it had gotten the information.

Several national security officials say the powers granted the N.S.A. by President Bush go far beyond the expanded counterterrorism powers granted by Congress under the USA Patriot Act, which is up for renewal. The House on Wednesday approved a plan to reauthorize crucial parts of the law. But final passage has been delayed under the threat of a Senate filibuster because of concerns from both parties over possible intrusions on Americans' civil liberties and privacy.

Under the act, law enforcement and intelligence officials are still required to seek a F.I.S.A. warrant every time they want to eavesdrop within the United States. A recent agreement reached by Republican leaders and the Bush administration would modify the standard for F.B.I. wiretap warrants, requiring, for instance, a description of a specific target. Critics say the bar would remain too low to prevent abuses.

Bush administration officials argue that the civil liberties concerns are unfounded, and they say pointedly that the Patriot Act has not freed the N.S.A. to target Americans. "Nothing could be further from the truth," wrote John Yoo, a former official in the Justice Department's Office of Legal Counsel, and his co-author in a Wall Street Journal opinion article in December 2003. Mr. Yoo worked on a classified legal opinion on the N.S.A.'s domestic eavesdropping program.

At an April hearing on the Patriot Act renewal, Senator Barbara A. Mikulski, Democrat of Maryland, asked Attorney General Alberto R. Gonzales and Robert S. Mueller III, the director of the F.B.I., "Can the National Security Agency, the great electronic snooper, spy on the American people?"

"Generally," Mr. Mueller said, "I would say generally, they are not allowed to spy or to gather information on American citizens."

President Bush did not ask Congress to include provisions for the N.S.A. domestic surveillance program as part of the Patriot Act and has not sought any other laws to authorize the operation. Bush administration lawyers argued that such new laws were unnecessary, because they believed that the Congressional resolution on the campaign against terrorism provided ample authorization, officials said.

Seeking Congressional approval was also viewed as politically risky because the proposal would be certain to face intense opposition on civil liberties grounds. The administration also feared that by publicly disclosing the existence of the operation, its usefulness in tracking terrorists would end, officials said.

The legal opinions that support the N.S.A. operation remain classified, but they appear to have followed private discussions among senior administration lawyers and other officials about the need to pursue aggressive strategies that once may have been seen as crossing a legal line, according to senior officials who participated in the discussions.

For example, just days after the Sept. 11, 2001, attacks on New York and the Pentagon, Mr. Yoo, the Justice Department lawyer, wrote an internal memorandum that argued that the government might use "electronic surveillance techniques and equipment that are more powerful and sophisticated than those available to law enforcement agencies in order to intercept telephonic communications and observe the movement of persons but without obtaining warrants for such uses."

Mr. Yoo noted that while such actions could raise constitutional issues, in the face of devastating terrorist attacks "the government may be justified in taking measures which in less troubled conditions could be seen as infringements of individual liberties."

The next year, Justice Department lawyers disclosed their thinking on the issue of warrantless wiretaps in national security cases in a little-noticed brief in an unrelated court case. In that 2002 brief, the government said that "the Constitution vests in the President inherent authority to conduct warrantless intelligence surveillance (electronic or otherwise) of foreign powers or their agents, and Congress cannot by statute extinguish that constitutional authority."

Administration officials were also encouraged by a November 2002 appeals court decision in an unrelated matter. The decision by the Foreign Intelligence Surveillance Court of Review, which sided with the administration in dismantling a bureaucratic "wall" limiting cooperation between prosecutors and intelligence officers, cited "the president's inherent constitutional authority to conduct warrantless foreign intelligence surveillance."

But the same court suggested that national security interests should not be grounds "to jettison the Fourth Amendment requirements" protecting the rights of Americans against undue searches. The dividing line, the court acknowledged, "is a very difficult one to administer."

Correction: December 28, 2005, Wednesday. Because of an editing error, a front-page article on Dec. 16 about a decision by President Bush to authorize the National Security Agency to eavesdrop on Americans and others inside the United States to search for evidence of terrorist activity without warrants ordinarily required for domestic spying misstated the name of the court that would normally issue those warrants. It is the Foreign -- not Federal -- Intelligence Surveillance Court.

Barclay Walsh contributed research for this article.

National Review

Editor’s note: This piece was originally published by Arc Digital. It is reprinted here with permission.

On Monday, a man tried to set off a suicide bomb in an underground passageway connecting New York’s Port Authority and Times Square subway stations. The homemade pipe bomb he strapped to his chest exploded, but not as powerfully as he’d planned. No one but the bomber was seriously hurt.

The incident highlights two things: the seriousness of the jihadist threat and the popular tendency to exaggerate it.

I’ve been teaching classes on terrorism for over ten years, and have come across numerous cases of failed attacks — not attempts thwarted by security services, but opportunities to kill people that failed thanks to incompetence.

One of the best examples is another case from New York City. Faisal Shahzad, an American citizen, tried to attack Times Square on May 1, 2010. Shahzad attempted to set off a car bomb in his SUV, which he left parked with the engine on and hazard lights blinking. Nearby street vendors noticed smoke coming from the vehicle and heard firecrackers going off inside. They alerted the NYPD, which evacuated the area and dismantled the bomb. The SUV caught fire, but never detonated.

Here’s the best part: Even if Shahzad’s bomb had gone off, the explosion would have been limited. Along with alarm-clock triggers, M-88 firecrackers, and two five-gallon containers of gasoline, the disposal team found 250 pounds of fertilizer. Shahzad likely got the idea from the 1995 Oklahoma City bombing. But Timothy McVeigh and Terry Nichols employed volatile ammonium nitrate fertilizer, the sort farms disperse over large fields. Shahzad used urea fertilizer designed for home gardens.

He basically filled his car with dirt. If the bomb had gone off, the fertilizer would have absorbed some of the explosion rather than magnifying it.

Here’s another example: In 2007, two would-be suicide bombers filled a jeep with propane and crashed into a terminal at the Glasgow airport. They damaged some doors, but they couldn’t get past a security barrier, and they hurt no one but themselves (one died from the burns).

So it’s important to keep in mind how rare successful terrorists are:

Few people hold extremist beliefs.

Of those, only some advocate violence.

Of those, only some are willing to carry out an attack.

Of those, only some can acquire the means to do so.

Of those, only some are psychologically capable of going through with it.

Of those, only some are crafty or lucky enough to avoid getting caught in advance.

And of those, some screw it up.

When it comes to terrorism, the United States is primarily concerned with jihadists — namely al-Qaeda, ISIS, and their affiliates and sympathizers. Considering the small subset of people who could make for successful terrorists, and the efforts of America’s police, military, and intelligence services, it’s not surprising few attacks have succeeded.

In the 16 years since September 11, there have been eight deadly jihadist attacks in the United States — including the Fort Hood shooting, the Boston Marathon bombing, the San Bernardino shooting, and the vehicle attack in New York City earlier this year — killing a total of 95 people.

To put that in perspective, in this year alone there have been 406 incidents in which four or more people were shot. This includes injuries in addition to fatalities, but 586 people died in these attacks.

Looking at the paucity of deadly jihadist attacks in the United States, political scientist John Mueller argues that America is suffering under a “terrorism delusion.” More Americans die each year from accidentally drowning in bathtubs. According to Mueller, this shows we’re grossly overstating the risk of jihadist terrorism, and devoting far too many resources to preventing it.

There are two problems with this logic. First, Mueller fails to consider the possibility that the reason there aren’t many terrorist attacks is America’s efforts to prevent them. After all, more Americans die each year from bathtub accidents than from nuclear bombs, but that doesn’t mean we should be unconcerned about the latter.

Additionally, the costs of a terrorist attack extend beyond the immediate victims. If someone accidentally drowns in a bathtub, it hurts that individual, along with people close to them. But terrorist attacks cause greater psychological, political, and economic damage.

Consider the Boston Marathon bombing. The attack killed three people, which Mueller would label a relatively small cost. However, the bombs also injured over 200, all of whom required medical care. Some, such as those who lost legs, will endure reduced productivity and ongoing medical costs throughout their lives, while many more suffered psychological harm, adding additional ongoing costs to the total. To that number we have to add the money spent on police overtime and other expenses associated with the ensuing manhunt, as well as the lost productivity from shutting down the economic center of Boston. And those are just the measurable financial costs that Mueller ignores.

In addition to harming the victims and their friends and families, the Boston Marathon bombing hurt the city of Boston and the surrounding area, the long-distance-racing community, Americans in general, and many others worldwide. The United States may exaggerate the threat of terrorism, but it is not unreasonable to treat the cost of a terrorist attack as considerably higher than the cost of a similar number of deaths from bathtub drownings, lightning strikes, and other more frequent but less resonant causes of death.

To some extent, that’s true of mass shootings as well, even those that are not political in nature (and thus not terrorist attacks in themselves). Though these two types of violence are not identical, they’re more similar to each other than to bathtub drownings.

Nevertheless, terrorism has the ability to upend the political landscape, potentially baiting countries into self-defeating actions abroad or excessive restrictions on liberty at home. Jihadism is threatening because it’s an international movement involving organizations capable of professional operations — 9/11, the 2015 Paris attacks — and inspiring self-starters.

Akayed Ullah — the 27-year-old who failed to kill anyone in the NYC subway attack — is one of those self-starters. He told investigators he did it in retaliation for U.S. airstrikes against ISIS in Syria and elsewhere, but he was radicalized online and acted on his own, without training or instructions. ISIS found out about Ullah the same way the rest of us did.

Self-starters are very difficult to stop in advance, because they lack connections to known terrorists that intelligence agencies can trace. Even those on the FBI’s radar — such as Boston Marathon bomber Tamerlan Tsarnaev, or Orlando night-club shooter Omar Mateen — usually do not commit jailable offenses before executing their attacks.

Unlike al-Qaeda, which prioritizes professional operations — and hasn’t managed to execute one in the West in years — ISIS strongly encourages self-starters. Though they’re not in direct contact, the group makes sympathizers such as Ullah feel they are part of a larger movement, thinking globally but acting locally. And ISIS’s Al Hayat Media Center draws attention to individual attacks, increasing their political impact and inspiring future self-starters.

That transnational component — and the possibility that trained operatives will execute a larger attack, perhaps with a weapon of mass destruction — elevates the threat to a genuine national security concern.

President Trump responded to the failed NYC attack with calls for tighter immigration laws. Akayed Ullah was born in Bangladesh and has been living in Brooklyn for seven years. He’s a legal permanent resident with an F43 visa, which is available only to children or siblings of American citizens. After the bombing, Trump denounced this policy:

Today’s terror suspect entered our country through extended-family chain migration, which is incompatible with national security. . . . America must fix its lax immigration system, which allows far too many dangerous, inadequately vetted people to access our country.

Ullah is the only known jihadist who entered the United States through chain migration, and Bangladesh is not one of the six Muslim-majority countries included in the most recent iteration of Trump’s travel ban. And no one from those countries — Syria, Iran, Somalia, Yemen, Libya, and Chad — has ever executed a deadly terrorist attack on U.S. soil.

The point is not that Bangladesh should be added to the ban. Nor should Saudi Arabia, home to 15 of the September 11 hijackers, or Pakistan, where Faisal Shahzad was born. The point is that banning entire countries is poor counterterrorism strategy.

Assuming that everyone of a particular national origin — or religion — is a security threat is like trying to kill a fly with a 20,000-pound bomb. At best it’s excessive. At worst, it’s actively detrimental.

For example, Chad — which Trump added to his third ban after courts blocked the first two — is one of the United States’ most important counterterrorism partners in Africa. The Chadians fight Boko Haram, which pledged loyalty to ISIS, as well as other local ISIS and al-Qaeda affiliates. Trump jeopardized this cooperation despite no known cases of Chadians even attempting a terrorist attack against an American target.

It’s hard getting countries to share intelligence and conduct joint operations after effectively announcing, “None of you people can be trusted.”

The travel ban boosts jihadist propaganda, enhances recruitment, motivates self-starters, and discourages individual Muslims and various governments from cooperating with the United States while doing very little (if anything) to reduce the risk of terrorism. On balance, it probably makes the problem worse.

That’s why it’s important to keep the threat in perspective.

If foreign jihadists really were streaming into the United States, then there might be an argument for some sort of travel ban. But given the relative rarity of successful attacks, it’s clear America’s screening procedures, intelligence monitoring, military action abroad, and security efforts at home have a handle on the problem.

Jihadists are a serious threat, but far from an overwhelming one. The United States should respond with a robust counterterrorism strategy. The goal is zero successful attacks, though that may be impossible to achieve. Overreacting, however, damages American interests.

No reason to do the terrorists’ work for them.

— Nicholas Grossman is an assistant professor of political science at the University of Illinois.

If, as the ancient philosopher Polemarchus and the labor leader Samuel Gompers agreed, justice and politics are about rewarding friends and punishing enemies, then I can see the appeal to the House Republican caucus of taxing the tuition waivers that universities grant to Ph.D. students. After all, Vegas bookies list doctoral students as the odds-on favorite in any contest for the group of people least likely to support the GOP on anything, ever, for any reason. Likewise, a green-eyeshade analyst who lacks vindictiveness but is enamored of tidiness of mind could favor taxing tuition waivers on general accounting principles.

But these inclinations should be weighed against the substantial risks to a sector where America decisively leads the world. Depending on which ranking you use, American universities take between 17 and 19 of the top 25 slots in rankings of worldwide universities. At the top levels, what matters is research, so this ranking is almost entirely measuring the strength of our graduate programs.

As a rule, doctoral students at American universities enter on merit fellowships, which have two major components: a stipend and a tuition waiver. The stipend covers the student’s living expenses. In my field of sociology, stipends at top programs range from about $17,000 to $28,000, with public universities clustering at the lower end and private universities toward the top. The tuition waiver negates the sticker-price tuition, which is typically about $14,000 at public universities and $45,000–$50,000 at private universities. Current law and the Senate bill tax only the stipend (that is, income the student actually sees), whereas the House version treats the tuition waiver as taxable in-kind income.

If we imagine a Ph.D. student at a public university who gets a $17,000 stipend and a $14,000 tuition waiver, plus a health plan, then the House bill would raise this student’s taxes by $2,112. A Ph.D. student at a private university who gets a $28,000 stipend and $50,000 tuition waiver, plus a health plan, would see his taxes go up by $10,752, cutting his take-home pay by more than a third.
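The mechanics behind these figures can be sketched with a toy progressive-tax calculation. To be clear, the brackets and deduction below are 2017-era single-filer numbers used purely for illustration; the House bill had its own schedule, so this sketch will not reproduce the article’s exact $2,112 and $10,752 figures, only the shape of the effect.

```python
# Illustrative sketch: how counting a tuition waiver as taxable in-kind
# income raises a Ph.D. student's tax bill. Brackets and deduction are
# 2017-era single-filer figures, used here only for illustration; they
# are NOT the House bill's schedule, so results differ from the article's.

def tax(income,
        brackets=((9_325, 0.10), (37_950, 0.15),
                  (91_900, 0.25), (float("inf"), 0.28))):
    """Progressive tax on `income`, given (upper_bound, rate) brackets."""
    owed, lower = 0.0, 0
    for upper, rate in brackets:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

deduction = 10_400            # assumed standard deduction plus exemption
stipend, waiver = 17_000, 14_000

current = tax(max(stipend - deduction, 0))           # stipend alone is taxed
house = tax(max(stipend + waiver - deduction, 0))    # waiver counted as income

print(round(house - current))  # prints 1964: extra tax owed on the waiver
```

Under these illustrative parameters the $14,000 waiver adds roughly $2,000 in tax; swapping in the private-school numbers ($28,000 stipend, $50,000 waiver) pushes the extra income into a higher bracket, which is why the article’s private-university figure is so much larger.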

In the short run, the impact of the House plan would be that doctoral students either drop out or take on significant debt. Those who complete their Ph.D.s might not be the ones doing the best research but those with the least need for income (such as childless students), students with other household income, and those with the least attractive job opportunities if they leave grad school prematurely.

The more interesting thing to consider is how universities would adapt. The most immediate change is one that schools could make unilaterally at the department level: admit fewer graduate students so that those who are admitted could receive higher stipends to compensate for the tax on tuition waivers. Universities could also find substitutes for graduate-student labor: adjuncts for teaching; postdocs for research. Again, this could happen at a low level. Departments could hire adjuncts to teach undergraduate classes left unstaffed by smaller graduate cohorts, and faculty who are writing grants to hire staff for their labs would realize that postdocs cost less than graduate students.

The glib response to criticism of taxing tuition waivers is that if the tuition-waiver tax passes, universities could simply eliminate graduate tuition and thereby also eliminate the tax their doctoral students would pay on tuition waivers. But simply eliminating tuition for Ph.D. students could be complicated by one aspect of schools’ current practice: Ph.D. programs often list but seldom charge tuition; professional-degree programs, on the other hand, really do depend on tuition; listing widely discrepant tuition between professional degrees and academic degrees would probably invite fines for tax evasion.

In practice, if the tuition-waiver tax passes, we will probably see schools adopt a model of listing high tuition for the first few years of graduate school and then low tuition for later years. In effect, this would mean tuition or tuition-waiver taxes for professional degrees and the coursework half of Ph.D. programs, and then nominal tuition, implying minimal waiver taxes, when students are no longer taking classes but are just working on their dissertations. Many top private universities (such as Harvard, Princeton, Duke, and Stanford) already list high graduate tuition for the first few years and then cut tuition by 75–90 percent for dissertation work. If the House plan to tax graduate tuition waivers passes, expect to see Yale, as well as most public universities, also adopt this tuition model. Charging low tuition to students working on their dissertation would probably pass muster with the IRS and would mitigate, but by no means eliminate, the effect of the tuition-waiver tax.

Another issue with simply making tuition free or nominal to Ph.D. students is that charging tuition lets you bill it to someone. Often these are just internal flows, which can get Byzantine, and it would be a painful process to watch university administrations unwind the current model of robbing Peter to pay Paul without overly enriching Peter or starving Paul.

A more serious issue is that third parties sometimes pay the tuition of doctoral students. At some universities, the tuition is covered by grants that fund the salaries of graduate students working in a faculty member’s lab, a practice that is common in STEM fields. Likewise the National Science Foundation pays the first $12,000 of tuition for those students to whom it awards fellowships. So universities that charge low graduate tuition to avoid the tuition-waiver tax would forgo a lot of grant revenue.

In theory, universities could have their cake and eat it too by having low tuition and charging higher indirect costs (overhead charges that universities charge to grants), but that works only if every crank and cog in the Rube Goldberg policy machine works just right. As we know from the example of Obamacare, this is highly unlikely.

While American K–12 education is lackluster, our universities are world-class, and our doctoral programs in particular dominate. Not only are American Ph.D.s the most prestigious, but they are a plurality of all Ph.D.s worldwide, at almost 30 percent of the total. About 40 percent of American Ph.D.s (and a majority in physical sciences and economics) are granted to foreigners; this is a source of high-skilled immigration if these students stay in the United States, and cultural influence with foreign elites if they go home. The dominance of American universities is a major reason that English became the lingua franca of science decades before it achieved that role in business. American dominance of doctoral education is all the more remarkable given that our doctoral programs are relative latecomers. American higher education dates back to the establishment of Harvard in 1636, but the American Ph.D. system is much younger, dating to three Yale doctorates in 1861 and especially to the establishment of Johns Hopkins in 1876. Prior to this, Americans who attained doctoral degrees mostly did so in Germany. If we are not careful, we could lose our dominance over graduate education, just as the Germans lost their dominance in the early 20th century to American and British universities that were aggressively expanding their research and graduate missions.

Currently, top American doctoral programs compete almost exclusively with one another for the best students. Should the tuition-waiver tax make it through reconciliation and into law, I will probably start hearing from top students whom my department is trying to recruit that our offer is not as good as the ones they have in hand from the University of Toronto, the University of British Columbia, or McGill. Over the space of a few years, we could see these foreign schools poaching top American faculty who want to work with the best graduate students. Canadian intellectual hegemony is not the worst thing I can imagine, but the intellectual center of gravity might shift to nations that are not anglophone liberal democracies. China is aggressively expanding its doctoral programs, but the quality of American Ph.D. programs ensures that they remain the gold standard, and many faculty at Chinese universities have American doctorates. Regardless of the arguments for a tuition-waiver tax — and whether they’re based in economic calculations or motivated by sheer spite at the ideological skew of universities — it would be folly to sabotage a field that America so thoroughly dominates.

— Gabriel Rossman is an associate professor of sociology at the University of California, Los Angeles.

Palestinian president Mahmoud Abbas says he will no longer accept a role for the United States in the ongoing Arab–Israeli peace negotiations, which have produced little in the way of negotiation and nothing in the way of meaningful peace.

If President Abbas desires to end diplomatic relations with the United States, the United States should think seriously about obliging him.

When the Trump administration announced plans to comply with longstanding U.S. law and move — someday — the U.S. embassy in Israel to Jerusalem, the country’s capital, the Arabs went nuts, but not quite as nuts as the Turks would have liked: Turkish strongman Recep Tayyip Erdogan has criticized what he sees as a weak Arab response to the Trump administration’s non-initiative initiative. Turkey has its own game, and Palestinian upheaval would suit Erdogan just fine.

There were apocalyptic intimations, but in reality the response was more or less what one would expect, and if the Olympic committee ever recognizes rock-throwing as a legitimate sport, the Palestinian people will finally have found their national calling.

This is a familiar and tedious piece of performance art. The Palestinian statelet is in no way viable, and the Palestinian cause is less and less useful to the Islamic powers with each passing year. The Arab–Israeli conflict was for a time another Cold War proxy, with the Palestinian cause serving as a cat’s-paw for the Soviet Union, which meant that it was a source of real money and real power. Those days are long gone, and the Palestinian cause has in no small part devolved from instrument of civilizational conflict to instrument of ordinary grift, a phony jihad used to fortify the alliance between fanatics and financial interests that is the default model of government throughout much of the Muslim Middle East.

To keep this particular grift going, it is necessary that there be no settlement between Israel and the Palestinians and no meaningful progress toward it. That means that every little step toward resolution must be met with murder and terrorism — terrorism is in fact the main Palestinian mode of negotiation. The capital of Israel is in Jerusalem, and there is no serious proposal under which it is going to move to Tel Aviv or elsewhere. Even should East Jerusalem come to be generally recognized as the capital of the Palestinian state, such as it is, that is not going to change the fact that the west of the city is and long has been Israeli territory, and it hosts the Israeli capital. President Trump’s announcement did not change any of that, but it did represent a baby step in the general direction of resolution — and that is why it has been met with such hysteria, at least in circles of power in the Islamic world.

That’s the Palestinian way: Every step toward resolution, even small and largely symbolic ones, must be met with maximal opposition, up to and including political violence and terrorism. Whatever sympathy one may feel for the Palestinian people themselves, their leaders and the leaders of their allies are not good-faith negotiating partners and are not likely to become good-faith negotiating partners. It is difficult to negotiate a lasting peace when one side does not want peace at all.

President Trump promises to unveil an Arab–Israeli peace plan sometime in the coming year. It is being worked up by his son-in-law, a real-estate developer whose political acumen is such that he was unable to figure out how to cast an absentee ballot in the most recent New York City mayoral election. (As the New York Daily News reported in its hilarious account, Kushner, Ivanka Trump, and the Third Lady all managed to botch their ballots to the point of nullifying them; the president himself may have invalidated his ballot, too, by getting his own birthday wrong on the paperwork. “Not since Jefferson dined alone,” and all that.)

Abbas boasts that the Palestinian state and the Palestinian National Authority no longer receive U.S. aid, but that isn’t quite true. The United States is a very large contributor to UNRWA, the relief agency for Palestinian “refugees.” (There aren’t any Palestinian refugees, really, but, unlike the rest of the world’s peoples, Palestinians inherit refugee status.) The United States is also a large contributor to other U.N. programs and international organizations that provide aid to the Palestinians, who, thanks to their incompetent and malevolent leadership, have no real economy to speak of. In 2016, the United States gave more in aid to the Palestinians than any other country did.

UNRWA is a troubled and troubling organization on its best day, an encourager and enabler of Palestinian radicalism. The prospects for peace probably would improve if it were dissolved. But, short of that, the United States should consider accommodating President Abbas’s demand and stepping away from the situation for a while, taking our aid money with us. If President Abbas must have his obstinacy and his cheap theatrics, then let him pay the full price for them. Let’s see how much loose change Erdogan can scramble up from the cushions of his ottoman. The haul is likely to be disappointing.

The United States has global interests, and one of those is seeing to the interests of our allies, including Israel. President Abbas thinks the United States has no role in future peace negotiations in the Middle East. One could not blame Americans for thinking much the same thing about him. What’s certain is that American power and American interests will be here when President Abbas has joined the footnotes, and the powers that be in the Islamic world would do well to meditate on that fact.

— Kevin D. Williamson is National Review ’s roving correspondent.

Editor’s Note: As part of the National Review Institute’s End-of-Year Appeal, NRI fellows are sharing words of wisdom and inspiration. Today, Jay Nordlinger discusses NRI’s programmatic virtues and shares his affection for and admiration of Bill Buckley. In that spirit, we encourage you to find out more about the Institute’s Buckley Legacy Project.

One of the most recurring phrases around here is “Buckley legacy.” What does it mean? I suppose it means different things to different people. WFB was big and multifaceted, and so is his “legacy.”

He loved life, for sure. Lived it with zest. His only fear, so far as I know, was that, for a second, he might be bored. He had a wide array of interests, from painting to sailing to gadgets. (He was one of the great gadget men of all time — a regular Inspector Gadget.) He had a wide array of friends: left, right, and center. Two of his best friends were named Galbraith: John Kenneth (on the left) and Van (on the right).

At his table, I met many liberals — including David Halberstam, Mario Cuomo, and Garry Trudeau. WFB engaged them. These days, people are apt to say, “Shut up” and “Go away.” WFB was more apt to say, “Come to dinner.”

He was world-minded, and so was his magazine. He stuffed it with reports and stories and opinions from all over. He also stuffed it with ballet, theater, “delectations,” etc.

WFB was devoted to high culture. And at his table were many, many artists. I thought of one of them the other night, for I reviewed her in recital.

This was Sharon Isbin. WFB said to her, “So, I understand you’re the best guitarist in the world.” She responded, “No, no. There is no ‘best guitarist in the world.’ That would be like saying there’s a best writer or something” — whereupon the great writer flashed his 1,000-watt smile and said, “Waal . . . ”

He got into the grubby day-to-day of politics, but he always kept high principles and high values. He was independent of party. He stood up to Left and Right. To do the latter was more painful, surely — it cost him more. (I could cite chapter and verse.) But he did it.

Everyone likes to “speak truth to power.” There is much glory in it. There’s a lot less glory in speaking truth to people. WFB did not shrink from it.

He was so confident in his views — in his ability to think and speak and write — that he accommodated other views. In an anniversary issue, he published an essay by a leftist, an essay by a liberal, and an essay by a conservative. Can you imagine such a thing? WFB did.

His TV show, Firing Line, featured a veritable human parade. He had people from all walks of life, with all sorts of views. He wanted to get from them what they knew. Everyone knows something — or many things. And WFB wanted to pull those things from them.

I host a podcast called “Q&A.” I, too, like to pull. In recent weeks, I have interviewed Angela Gheorghiu on opera, Mark Helprin on literature, and Ash Carter on defense. They know stuff, those people. Get from them what they know.

Also in recent weeks, I have been to Sweden, to look into their recent defense moves. (Putin’s Russia has concentrated their minds, and every other mind in the region.) I talked to several experienced and influential people, getting from them what they know, and passing it on to others.

I wrote about the 2017 Nobel Peace Prize, given to an anti-nuke group. That’s a big, indeed apocalyptic, issue: nukes. I did a story about a onetime cabinet official from Colombia, now seeking political asylum in the United States. I did a report from the Salzburg Festival, describing an assortment of performances.

Once, as I was leaving for Salzburg, WFB said, “Say hello to music for me!” Another writer, Vikram Seth, has written, “Music is dearer to me than speech.” I think the same was true of WFB.

I went to Dallas to talk about him. The audience was a group of “regional fellows” of the National Review Institute. It was a pleasure to talk about him, and to meet them. I wrote a book on a strange and interesting topic: the sons and daughters of dictators. Thanks to NRI, I was able to go around the country talking about it, and making points about freedom and unfreedom. The best part of these events, almost invariably, was the Q&A.

As you might guess, I’m grateful to work for National Review and the National Review Institute. We are now raising money (natch). We like to say, “We’ve never had a sugar daddy, like George Soros.” The truth is — and more to the point — we’ve never had a sugar daddy of the Right. We have survived, and sometimes flourished, thanks to the donations of people who appreciate the enterprise and give what they can.

You can do this here. And, if you like, leave a comment, saying why you’re donating.

The question sometimes comes up, Why has National Review never had a sugar daddy? What’s the problem, what’s the hold-up? I’m not entirely sure. But I can tell you that a lot of people like a publication, or a group, or some other entity, that will toe a line. WFB was never much of a line-toer. And this meant that a lot of people parted ways with him. Which he bore.

He was a great nurturer of young people — young writers — and so is NRI. That’s one thing that your donations do. A very important thing. (To check out the range of NRI activities and purposes, go here.)

I can tell you that it is very, very encouraging to be around our young writers today. They are confident conservatives, for sure — but they are also allergic to charlatanism and mere partisanship. WFB would have loved them, as I do, and as you would (and perhaps do already).

Once, WFB twitted LBJ for saying — in a State of the Union address, I believe — “the future lies ahead of us.” When doesn’t it? Well, it may be equally twittable to say, “Young people are our future.” But there’s something to it.

Anyway, enough from me. I know there are thousands of causes in the world, and thousands of good causes. The National Review Institute is one. But it is one that makes a dent — a righteous dent — and thank you, thank you, for giving what you can. One of WFB’s many books is called “Gratitude.” I know the feeling.

P.S. This may be bragging, but what the heck (I’ve done worse). As a rule, I don’t ask others to give to what I myself don’t give to. Well, I am a donor to NRI — and, indeed, a member of the 1955 Society.

— Jay Nordlinger is a senior editor of National Review and a book fellow at the National Review Institute.

Recently, the Washington Post offered readers a “peek into a world after a massive tax cut” — a visit to Alamance County, in my home state of North Carolina, where reporter Todd C. Frankel rode along with factory owner Eric Henry.

Henry, whose T-shirt-manufacturing company “almost went belly-up in the mid-1990s,” said he’s been doing well in recent years and his business is growing. This summer, he had his best production month ever and gave a bonus to his employees. But, he said, he didn’t know people who benefit from North Carolina’s tax cuts.

I’d say he should look in the mirror. The truth, somewhat obscured by the article’s anecdotes, is that millions of North Carolinians like Eric Henry and his workers have steady jobs and live in a more prosperous economy because of North Carolina’s tax cuts.

Just five short years ago, our economy was floundering and unemployment hovered around 10 percent. Since then, we’ve added 245,000 people to our labor force and the unemployment rate has been slashed almost in half. That’s a whole lot of folks who are better off now than they were then. And there’s no doubt that their improved fortunes are attributable to our tax cuts.

Since 2013, our corporate income-tax rate has dropped from 6.9 percent to just 3 percent — the lowest rate of any state that levies a corporate income tax. It will fall again to 2.5 percent in two years. Our personal income-tax rate, which was close to 8 percent, is now 5.5 percent and will drop to 5.25 percent in 2019, delivering $2.8 billion in tax relief to North Carolinians over the next five years. And the standard deduction, which has already doubled, will have more than tripled by 2019.

Because of these reforms, our state made the “most dramatic improvement” in the history of the Tax Foundation’s Business Tax Climate Index, jumping from No. 41 to No. 11 in just one year.

When North Carolina first embarked on its tax-cutting effort, naysayers warned that revenues would plummet and our state would face a budget crisis. Among them was Alexandra Sirota, director of the left-leaning North Carolina Budget and Tax Center, who predicted in 2013 that the cuts would “weaken North Carolina’s tax system and broader economy.” She also said they would “[jeopardize] our future by undermining the long-term ability of our state to maintain the building blocks of a strong economy.”

Happily, Ms. Sirota and others have been proven wrong.

In 2013, the year the tax cuts were passed, the American Legislative Exchange Council ranked North Carolina’s economic outlook No. 22 in the country. Today, we’re No. 3. Forbes says the Tar Heel state is the best in the country for business. And we were named the most competitive state in the nation by Site Selection magazine again this year, after tying with Texas last year and winning the award outright in 2015.

Not only is business booming, but our state budget is in great shape as well. Since the tax cuts passed, we’ve repeatedly experienced revenue and budget surpluses. Today’s rainy-day fund is at a record $1.8 billion and lawmakers recently boosted performance pay for North Carolina teachers.

That’s because the broad tax cuts were coupled with rollbacks in corporate-welfare giveaways and measures designed to restrain the growth of spending. Overall, spending will increase just 3 percent this fiscal year, which is below the 3.8 percent combined growth of inflation and population.

In other words, North Carolina legislators showed the nation how to successfully implement tax cuts that grow the economy without destroying the state budget. This is a sharp contrast to everyone’s favorite tax-cut boogeyman, Kansas. There, after the passage of dramatic tax cuts in 2012, total state spending increased almost every year. Between the budget years of 2010 and 2018, Kansas lawmakers increased expenditures by almost 25 percent, from $5.3 billion to $6.6 billion. And even in the face of declining revenues, the legislature failed to rein in spending. You don’t have to be an economist to understand how this would lead to a budget crisis.

It’s a shame, because the ingredients for economic success were there. The year Kansas enacted its tax cuts, more than 15,000 small businesses opened, the most the state had ever seen in such a short time. The following year, the state’s unemployment rate dropped from 5.5 percent to 4.9 percent. Over the same time period, its economic outlook jumped from 26th to 11th in the ALEC rankings.

The Post’s article may have given readers “a peek” at our state. But a peek, by definition, isn’t a full picture. North Carolina’s economy is growing, prosperity and opportunity are expanding, and lives are getting better across the state. That’s what I see every day. It’s my hope that Washington will follow North Carolina’s blueprint, not Kansas’s folly, and couple historic tax cuts with efforts to restrain spending.

— Donald Bryson is the North Carolina state director of Americans for Prosperity.

Almost every supposedly informed prediction about President Donald Trump’s compulsive Twitter addiction has so far proved wrong.

He did not tweet his way out of the Republican nomination. Spontaneous social-media messaging did not lose Trump the general election race with Hillary Clinton. Nor has Trump tweeted his presidency into oblivion.

Instead, Trump’s tweets have not just bypassed the mostly progressive media; they’ve sent it into a tizzy. In near-suicidal fashion, networks such as CNN have melted down in hatred of Trump, goaded on by Trump’s Twitter digs.

Trump has often bragged that having a large following on social media — he has more than 44 million Twitter followers and connects with millions more via Facebook and Instagram — is “like having your own newspaper.”

While the media goes ballistic over Trump’s inflammatory Twitter attacks on “fake news,” the vast majority of Trump’s electronic messaging simply reports on his daily activities and various agendas.

Trump has created a Twitter empire with a reach that far exceeds the combined subscriber base of the New York Times and Washington Post, and he has vastly expanded on Barack Obama’s use of a presidential Twitter feed to connect with voters.

Almost all of the people who have climbed into the Twitter ring with Trump — from Hillary Clinton to the take-a-knee millionaire National Football League players — have come out on the losing end. Trump has proven far better than seasoned journalists and ego-driven celebrities at creating go-for-the-jugular put-downs of 240 characters or fewer.

Trump’s stream-of-consciousness Twitter observations have sometimes proved eerily prescient. He tweeted warnings about the dangers of illegal immigration shortly before the tragic murder of Kate Steinle by an undocumented immigrant with a lengthy criminal record. Soon after Trump retweeted incendiary and controversial videos about radical Islamic violence, earning him condemnation from British prime minister Theresa May, it was announced that two men had been arrested in London for plotting a terrorist attack and assassination — of none other than May herself.

Why, then, should Trump ever consider pruning back his controversial tweets or confining them to the reportage of his daily achievements, in the manner of every other mostly boring politician?

Because personal dueling with journalists, celebrities, and politicians is not only becoming superfluous; it is now distracting Trump’s audiences from a growing record of achievement.

Nine months ago, critics left and right were writing off Trump as an irrelevant buffoon without a clue of what to do in the White House. They predicted perennial sloth and inaction.

Not now. Trump’s Cabinet and judicial appointments, executive orders, and deregulation measures are systematically overturning the progressive Obama project.

Abroad, the Trump national-security team has recalibrated U.S. foreign policy from an apologetic recessional to engaged, principled realism.

Republican politicians once grumbled about the utopian Paris climate accord but never thought of doing much about it. Trump, like him or hate him, summarily withdrew America from the agreement — and shrugged off the ensuing green outrage.

Members of Congress occasionally expressed support for the recognition of Jerusalem as the capital of Israel — but on the expectation that no barnstorming candidate would ever dare to officially recognize Jerusalem as such if elected president.

Prior to 2017, conventional economic wisdom dictated that the Dow Jones industrial average would not soon climb above 22,000, that the unemployment rate in peacetime could not fall to 4 percent, and that the GDP could not grow at an annual rate of 3 percent. All those shibboleths have either been blown up or may yet be blown up in 2018.

Trump is no longer written off by the Left as a sleepy dud. Instead, he suddenly is being redefined by many of his progressive enemies as a dangerous workaholic and right-wing revolutionary.

Never-Trump Republicans no longer insist that Trump is a liberal Manhattan wolf in conservative sheep’s clothing. Grudgingly, they now confess that he is ramming through a conservative agenda not seen since the days of their heartthrob, Ronald Reagan.

In other words, Trump’s record speaks louder than his tweets and now transcends his electronic spats.

So why should Trump still care what a minor journalist tweets about him to get much-needed attention? Why does the president need to keep pounding increasingly irrelevant former FBI director James Comey, who has been reduced to tweeting anti-Trump slogans?

Trump’s record has now transcended his Twitter ankle-biters, who have become ever shriller in seeking attention in the form of electronic counter-put-downs.

In sum, Trump has outgrown the Twitter wars. He should now just declare victory, retire as Twitter champ, hang up his tweeting gloves, and leave the slap-down ring for others.

Yes, it’s great that Doug Jones was able to defeat alleged pedophile Roy Moore on Tuesday — but anyone who actually thinks that this represents some kind of major cultural shift is going to be sorely disappointed.

Far too many people seem to believe that Jones’s election represents some kind of decline in Trump loyalism. A piece in the Washington Post declared that “Trumpism bottomed out in the Moore candidacy.” The New York Times counted it among examples of “stark repudiations of a first-term president, foreshadowing a larger repudiation soon to come.” The Irish Times asked: “Could this be the beginning of the end for the Trump presidency?”

It’s not that I don’t see their argument. President Trump, after all, did instruct his supporters to vote for Roy Moore, and only about 650,000 of them did so. Compare this to the fact that Trump received more than 1.3 million Alabama votes to Hillary Clinton’s fewer than 730,000 in the general election, and it seems as though support for him could be waning.

Yes — in any other circumstances, this could be conceivable. But the truth is, these are not normal circumstances: Roy Moore was facing credible accusations of child molestation, making him arguably far worse than any candidate we’ve seen, and almost certainly worse than any candidate we’ll ever see again. Unless another political party actually nominates an accused pedophile again, nothing about this election’s outcome can really indicate anything about the elections or political landscape of the future.

More than half of voters, after all, said that they believed the allegations, and 60 percent said that those allegations had influenced their votes. Roy Moore’s loss doesn’t mean that President Trump’s supporters have abandoned him or his agenda. It just means that, for some, voting for someone they have good reason to believe has sexually abused underage girls is a line that they won’t cross.

The key word there, by the way, is “some.” Yes, some Alabamians didn’t vote for Moore, but many of them did. This was a very, very close race, with Doug Jones winning by only 1.5 points over Moore, even though Moore was an accused pedophile. Really, is that something to be celebrating? Think about it: Almost half of voters either refused to believe multiple women’s very credible stories of abuse, or they said to themselves, “Well, I may disagree with Roy Moore being an accused pedophile, but I disagree with Doug Jones being a Democrat,” and they decided that being a Democrat was worse. Both of these options are truly disheartening — and any time I see someone call Jones’s win “stunning,” I can’t help but think that what’s truly “stunning” is the fact that it’s true.

I’d love to think that this election somehow proves that this nightmare of a political landscape that we’ve been living in is somehow coming to an end — but I highly doubt that this is the case. If anything, I think the fact that this race was such a nail-biter reveals our ever-growing tribalism, and a continued hesitation to believe women who come forward with stories about sexual abuse. I hope that I’m wrong.

— Katherine Timpf is a reporter for National Review Online.

Republican politics was starting to feel like a version of Mel Brooks’s The Producers. In the play, two scammers devise a tax write-off scheme in which they will make a killing by losing money on a Broadway show. They reach for the most grotesque, tasteless musical the human mind can conceive – Springtime for Hitler – and are undone when it’s a surprise hit.

Roy Moore could have sprung from the imaginations of Democratic operatives hoping to find the embodiment of every stereotype liberals cherish about conservatives. Ignorance? In a July radio interview, the anti-immigration hardliner couldn’t say who the Dreamers are or what DACA refers to. He did not know that the U.S. Constitution, which he purports to revere, forbids religious tests for public office. In the Republic of Moore, Muslims would be barred from serving their country.

Conspiracy monger? He trafficked in the birtherism about Barack Obama and suggested that parts of the Midwest are ruled by sharia law.

Anti-gay? Moore is not just a traditionalist who opposes same-sex marriage, he wants to put homosexuals in prison and claims that the U.S. is the focus of evil in the modern world for permitting gays to marry.

Irresponsible? Moore was twice removed from office for failing to obey the law.

Anti-Semitic? When your wife defends against the charge by protesting that “one of our lawyers is a Jew,” it’s not a good sign.

Racist? Anti-woman? Here’s where the Moore show veered into wild satire territory, or would have if we hadn’t actually seen it unfold. Moore said he agreed with Trump about making America great again. When, exactly, a voter asked, was America at its greatest? “I think it was great at a time when families were united, even though we had slavery, they cared for one another,” said the dolt Steve Bannon chose as the kind of Republican who would stick it to Mitch McConnell and the establishment. Remember how we all spat out our coffee when Joe Biden accused Republicans of wanting to put black folks “back in chains”?

As for women, Moore was the Democrats’ jackpot — a supposedly religious conservative flamboyantly fulminating against immorality who was himself a child molester. You could not write this as fiction because it’s too incredible.

In the aftermath of Doug Jones’s victory, many Republicans are saying they “dodged a bullet” because Democrats would have used “Senator” Moore to discredit the entire Republican party.

Their relief is understandable but premature. Though the morning-after commentary has tended to focus on Steve Bannon’s noxious role, the Moore candidacy was not his responsibility alone. A number of key Republicans – Richard Shelby, Mitch McConnell, Mitt Romney, Mike Lee, Cory Gardner, and others — treated Moore as radioactive, but an amazing percentage were willing to say that a sleazy bigot was fine as long as he would vote for the president’s agenda. Prominent “family values” conservatives like James Dobson, Tim Wildmon, and the infinitely flexible Jerry Falwell Jr. stood by their endorsements of Moore. Sean Hannity issued what seemed to be an ultimatum to Moore to give an account of himself regarding the teens he dated/molested, but then, Obamalike, backed away from his own red line. He said the people of Alabama would judge (as they certainly did, but not in the way Hannity was presumably hoping). Other Fox News hosts returned to the Clinton well again and again, implying that if Bill Clinton hadn’t been held responsible for Juanita Broaddrick, well then . . .

And of course, Moore’s most crucial booster was Donald Trump, someone with more than a passing interest in the “he denies it all” defense. The Republican party has not dodged him, and cannot. You can scan the exit polls of the post-2016 elections so far and draw a scary 2018 picture for Republicans. African Americans, who weren’t motivated to turn out in off-year elections even when President Obama implored them to, showed up in force in Alabama. Suburban educated voters – the key to Republican general-election victories – have turned against the party in formerly swing-state Virginia, and even in reddest Alabama.

The Republican party has voluntarily donned a fright mask that the hapless Democrats and the evil mainstream media could never have pinned on them. It is probably too late to avert the reckoning that is coming, but even if only as a gesture of civic hygiene, individual Republicans might wish to make clear that the Molotov-cocktail politics that Trump brought to the Oval Office is not what they represent.

In this season of frenzied liberal assault on the incumbent president, when the long-standing bipartisan political elite of the United States almost uniformly regards President Trump as a maniac, any falsehood about him and any act of obstruction are treated as justified if they damage his presidency, impair his ability to govern, and hasten the swiftest possible return of the status quo ante Trump. It is now routine for the principal outlets of media mythmaking to invoke the legacy of Richard Nixon confected by his accusers of long ago. The particular myth that has for several years been the preferred falsehood to resurrect and hurl at Mr. Nixon, as if it were a law of Archimedes, is that he sabotaged the Vietnam peace talks when he was a presidential candidate in 1968. The journeyman liberal historian Robert Dallek, in the preface to his recent biography of Franklin D. Roosevelt, stated in passing, as indisputable fact unworthy of elaboration, that Richard Nixon had violated the Logan Act of 1799 at the end of the 1968 presidential election campaign when he secretly advised the government of South Vietnam to abstain from cooperating with President Johnson’s peace initiative.

On December 4, the New York Times — in an editorial unequivocally stating that President Trump’s advisers had violated the Logan Act by undermining the foreign policy of the sitting U.S. president in his contacts with the Russians — invoked this left-wing truism about Nixon as if citing a clause of the Constitution that had not been challenged for centuries. It stated: “Richard Nixon once again proves useful. In the closing days of the 1968 presidential campaign Mr. Nixon ordered H. R. Haldeman, later his chief of staff, to throw a ‘monkey wrench’ into the Vietnamese peace talks, knowing that a serious move to end the war would hurt his electoral prospects. Mr. Nixon denied that he did this to the grave: Mr. Haldeman’s notes, discovered after his death, revealed the truth.” On examination, the Haldeman notes read: “re VN bomb halt news: Harlow-have Dirksen and Tower blast this. Dirksen call LBJ and brace him with this — any other way to monkey wrench it? Anything RN can do.” (Bryce Harlow was an adviser and Everett Dirksen and John Tower were Republican senators.)

There are indeed parallels between the left-wing media’s treatment of President Trump and its treatment of President Nixon. The fragmentary notes cited by the Times no more constitute proof that Nixon engaged in illegal activities than a tweet written by President Trump’s counsel (one that could be read as indicating that, when the president fired General Flynn, he might have known that Flynn had misinformed the Justice Department, as he had misinformed the vice president, about contacts with the Russians) constitutes substantial proof of the president’s obstruction of justice and therefore of his impeachability. The wish is father to the thought. These liberal hysterics, in their demented fury against the elected leader of the country, leap like gazelles, in the case of Nixon as of Trump, from innocuous or ambiguous asides to an instant conviction of certain proof of criminal wrongdoing; they imagine they build their feeble arguments by citing historical precedents flimsily constructed from the same whole cloth of their malicious imaginations.

In fact, in the tumultuous election of 1968, the real skullduggery was President Johnson’s claim of a (completely fictitious) breakthrough in the peace talks in Paris a week before the election in order to generate, as subsequent history proved to be the case, a totally unjustified sense of optimism that peace might be near on an acceptable basis. It was an absolute falsehood from A to Z. There had been no breakthrough and yet Mr. Johnson announced that the talks should resume because of a positive response from Hanoi and that the South Vietnamese government and the Vietcong would be “invited” to attend.

There is not one shred of evidence that Mr. Nixon or anyone acting for him had any direct contact at all with the government of South Vietnam at this time. The associate chairwoman of the Republican women’s campaign that year, Mrs. Anna Chennault (widow of World War II Air Force general Claire Chennault of the “Flying Tigers” in China), was a friend of the president of South Vietnam (Nguyen Van Thieu) and visited him occasionally. But there is no evidence whatsoever that she transmitted a message from Nixon or anyone else that constituted an attempt by her, acting for anyone, to conduct or influence the foreign policy of the United States. This is merely another in an apparently endless sequence of outrageous defamatory falsehoods of decaying Nixon-haters, throwing muck at his imperishable memory like people conducting a mockery of abstract art by throwing blobs of paint at a distant canvas. The South Vietnamese president needed no advice from anyone on the matter of which American presidential candidate he was likely to find more congenial. There is certainly a parallel between the diluvian imputation of discreditable motives and actions to Mr. Nixon, who was in fact one of the most successful presidents in American history, and the mindless portrayal of Donald Trump as a feckless monster slouching off every day to bring America into more profound perfidy and the world closer to destruction.

Specialists in aberrant mass psychology will one day derive great interest and perhaps generate much enlightenment on what it is that has possessed a wide swath of highly intelligent and generally civic-minded Americans to lose their minds on the subject of President Trump. The hatred of Nixon was a little more comprehensible because of the role he played in generating support for the Cold War and resistance to Soviet encroachments and some of the gratuitous and nasty things that he said and did in those efforts, including some reflections on President Truman, Secretary of State Acheson, Congressman Voorhis, and the unfortunate darling of the contemporary left, former Soviet spy Alger Hiss.

It is ironic that Donald Trump’s most fanatical critics in the media and among the Democrats have ever more frequent recourse to precisely the liberties with the truth that they find so irritating in him. The immensely representative congressman from the world capital of opinionated stupidity, Hollywood, Adam Schiff, has for over a year constantly incited the inference that the president has committed treason, and when even parrots of this view and lapdogs of the Trumpophobic left such as CNN’s Jake “drip, drip, drip” Tapper (the quote refers to Tapper’s perception of accumulating revelations of Trump-Russian collusion, of which there have been none) ask Schiff to elaborate, he swaddles himself in unctuous references to his duty as a member of the House Intelligence Committee to be discreet. CNN’s lies about Trump have been so numerous and egregious and so widely recirculated, they now constitute one of the most disgraceful chapters in the modern history of the U.S. national media.

This charade will not end in the premature departure of Trump from the presidency, because nobody near him has done anything inappropriate. Comparisons with Nixon are unfounded except that the evidence that Nixon himself committed illegalities remains inconclusive, though some around him certainly did. The best possible outcome is that, in patiently waiting for the shabbily conducted Mueller investigation to clear him completely, Trump will not only be vindicated; his enemies will sustain such a bone-crushing humiliation that this practice of trying bloodlessly to assassinate presidents will end. In that atmosphere, Richard Nixon will receive the fair upward revision of collective historical opinion that he richly earned.

Hi. You don’t know me. But like many others, I feel as if I know you, after reading the crushing short story about you that went viral after appearing in The New Yorker.

The story described how, during your sophomore year in college, you met a man named Robert when you were working in a movie theater, exchanged some funny and flirtatious texts with him, then took a study break to meet him for a snack at a 7-Eleven, which led to an awkward date and even more awkward sex. It’s evident from the tone of hurt, humiliation, and sorrow in your words that this was one of the most miserable experiences you’ve ever had in your 20 years.

Shortly after your thoughts appeared, the Internet teemed with sympathy for you and disgust with Robert, a bearded, paunchy 34-year-old who, during your nauseating single tryst, threw you around in bed “as if you were in a porno.” Many pointed out that, although the sex was consensual, it was not exactly enthusiastic on your part. The author who created you, Kristen Roupenian, tells us what went through your mind as you watched Robert hurriedly pull down his pants before he realized his shoes were still on:

Looking at him like that, so awkwardly bent, his belly thick and soft and covered with hair, Margot recoiled. But the thought of what it would take to stop what she had set in motion was overwhelming; it would require an amount of tact and gentleness that she felt was impossible to summon. It wasn’t that she was scared he would try to force her to do something against her will but that insisting that they stop now, after everything she’d done to push this forward, would make her seem spoiled and capricious, as if she’d ordered something at a restaurant and then, once the food arrived, had changed her mind and sent it back.

I’m sorry about what happened to you, Margot. But I don’t think you have thought through how you got into a terrible situation. In all of the responses that people — mostly young women like you — have written about your experiences, few have mentioned the two words in your story that jumped out at me: “seven” and “three.”

Robert is your seventh sexual partner. You’re 20 years old. Margot, I don’t know what the right number is for you, but seven is too many.

Please don’t mistake my concern for “slut-shaming.” I don’t think you’re a bad or immoral person. I won’t make the case that God is angry with you for not guarding your virginity until marriage. I won’t make the case that you should have sex with only the man you will eventually marry. But having sex with sketchy guys you don’t actually know after (by a generous estimation) 1.5 dates is a bad idea. When you were in that bedroom with Robert and he began taking off his pants with his shoes still on and you realized you were revolted, you had cornered yourself. You had left yourself with no good options. As you say, calling off the sex at that moment would have been somewhat painful. Going ahead with it turned out to be even worse. It’s evident that this hookup is going to bother you for a long time.

But you so easily could have avoided it. I’m from Gen X, two generations older than you, and I can tell you that, not that long ago, seven sex partners might have been considered a fairly robust tally for a lifetime. But for a 20-year-old? I know guys from college who married the third or second or even first girl they ever slept with. Needless to say, going back to a generation before me, seven sex partners in a lifetime would have been considered a startling number.

Margot, sex isn’t just a fun leisure activity. Your generation has been taught not to take it seriously. Yet sex takes you seriously. It’s obvious from your words that the night you spent with Robert has shaken you deeply. Whether you want to admit it or not, your feelings get dragged into it. Your personality. Your core.

Much of the Internet’s response to your sorrow has been, “Why can’t guys be better at sex?” That’s missing the point. Bad sex doesn’t need to be soul-crushing. If you had really forged a meaningful connection with Robert, you could have worked out your problems in bed over time. You could have made it clear that you didn’t like being treated like a porn star. You could have taught him what you like in bed.

Another popular Internet response has been, “It’s unfortunate that society makes it so that Margot felt she couldn’t call it off at the last minute.” But that’s missing the point too, because things had gone badly astray long before that. When you first got in Robert’s car, you wondered if he was going to rape and murder you.

Margot, I can’t believe I need to tell you this: If you’re in a car with a guy and you’re not sure if he wants to murder you, the date has already gone bad. The underlying problem is that you don’t know this man. Except for selling him Red Vines a couple of times at the movie theater and meeting him at 7-Eleven for that snack, you’ve never even talked to him before this night. Texting is not a way to get to know someone. I understand why your generation loves texting: because you have time to formulate the perfect response. You get to present a better version of yourself than you really are in the moment.

But guess what? Guys get to do that, too. Guys can make themselves look better than they really are. Texting-Robert is cool and funny. In-person Robert is so weird and awkward that you can’t be sure he doesn’t plan to slit your throat.

The way you deal with this nervousness brings me to the other word that jumped out at me: “three.” You have three beers (plus a slug of whiskey) with Robert, which impairs your judgment so badly that you signal to him that you want to sleep together. The drinking is another bad idea. Depending on your size, three beers for you might equal six beers for a man. Is anyone proud of anything he’s done after six beers? The drinking you two do happens right after a movie, with no dinner in between, which means you had those three beers on an empty stomach. You don’t offer any details about the beer, but bars these days often serve beers in pint glasses, and not just pint glasses but 20-ounce pint glasses. Three of these would be 60 ounces of beer, which is really five beers. Which is really ten beers.

Margot, having three beers with a guy you barely know is a big, big part of why you ended up having one of the worst experiences of your life. I know your generation has been taught that a girl can do anything a guy can do. But you can’t drink like a guy. If you hadn’t gotten drunk with Robert, the evening might not have turned into a catastrophe for you. Drink sparingly when you’re in situations that could turn dicey. If you can’t drink sparingly, don’t drink at all.

You’re only a fictional character, Margot, but at the same time, you’re not. Young women are responding to your tale by saying that much the same thing happened to them. You and the young women who see themselves in you should realize that your problem is not that so many guys are bad at dating or bad at sex (though we often are). Heed the lesson the world learned from Duke PowerPoint Girl: Getting drunk so you can have meaningless, unattached, random sex with guys you barely know is not going to make you happy.

— Kyle Smith is National Review’s critic-at-large.

The first time ended badly, so when, 156 years later, Alabamians were incited to again try secession, this time from the national consensus that America is a pretty nice place, they said: No. No, that is, to rubbish like this:

Interviewer: “[Ronald Reagan] said that Russia was the focus of evil in the modern world.”

Roy Moore: “You could say that very well about America, couldn’t you?”

Moore: “Well, we promote a lot of bad things, you know.”

Interviewer: “That’s the very argument that Vladimir Putin makes.”

Moore: “Well, then, maybe Putin is right, maybe he’s more akin to me than I know.”

In April, Alabama’s Republican governor, Robert Bentley, resigned one step ahead of impeachment proceedings arising from his consensual affair with an adult woman. Eight months later, Alabamians spurned presidential pleas that they send to the U.S. Senate a man credibly accused of child molestation. But the dispiriting truth is this: Behavior that reportedly got Moore banned from the Gadsden, Ala., mall was, for most Alabama Republicans, not a sufficient reason to deny him a desk in the U.S. Capitol.

Although the president is not invariably a stickler for precision when bandying factoids, he said the Everest of evidence against Moore did not rise to his standards of persuasiveness. This fleeting swerve into fastidiousness about facts came hard on the heels of his retweeting of a video of a Muslim immigrant in the Netherlands beating a young man holding crutches. Except the villain was born and raised in the Netherlands. Undaunted, Trump’s remarkably pliant spokesperson, Sarah Huckabee Sanders, defended her employer from the nitpickers: What matters, she said, is not that the video is unreal but that “the threat” (of turbulent Dutchmen?) is real.

Moore was such a comprehensive caricature — Sinclair Lewis could not have imagined this Elmer Gantry — that the acid rain of reports about his sexual predations, and his dissembling about them, almost benefited him by distracting attention from the remunerative use he made of a “charitable” foundation, and from his actions as a public official, which by themselves sufficed to disqualify him from any public office. He is an anti-constitutional recidivist, twice removed from Alabama’s highest court for his theocratic insistence that his religious convictions take precedence over U.S. Supreme Court decisions, so he could not have sincerely sworn to “support and defend the Constitution” and to “bear true faith and allegiance to the same.”

When reports of Al Franken’s misbehaviors against adult women surfaced, the National Republican Congressional Committee pounced: “Democrats who took Senator Franken’s campaign money need to . . . return his donations.” (Combined, they totaled $15,500.) When, 18 days later, Trump endorsed Moore, the Republican National Committee immediately sent $170,000 to Alabama. If the RNC, which accurately represents the president’s portion of the party, did not have situational ethics it would have none.

Moore has been useful as a scythe slicing through some tall stalks of pretentiousness: The self-described “values voters” and “Evangelicals” of pious vanity who have embraced Trump and his Alabama echo have some repenting to do before trying to reclaim their role as arbiters of Republican, and American, righteousness. We have, alas, not heard the last from them, but henceforth the first reaction to their “witness” should be resounding guffaws.

Elation is in order because a gross national embarrassment has been narrowly avoided. But curb your enthusiasm because nationally, as in Alabama, most Republicans still support the president who supported the credibly accused child molester. Alabama, however, has perhaps initiated the inevitable sorting of Republicans who retain a capacity for disgust from the Vichy Republicans who have none. After the president’s full-throated support of the grotesque, he should be icily shunned by all but his diehard collaborators. For example: When the president stages a signing ceremony for the tax legislation, no etiquette requires any Republican to be photographed grinning over his shoulder. Stay away.

By basking in the president’s approval, Moore became a clarifier. Henry Adams, great-grandson of the second president and grandson of the sixth, was unfair to the 18th when he wrote, “The progress of evolution from President Washington to President Grant, was alone evidence enough to upset Darwin.” By joining Steve Bannon’s buffoonery on Moore’s behalf, the 45th president planted an exclamation point punctuating a year of hitherto unplumbed presidential depths. He completed his remarkably swift — it has taken less than eleven months — rescue of the 17th, Andrew Johnson, from the ignominy of ranking as the nation’s worst president.

I was born in Opelika, Ala., in 1969. My parents were students at Auburn University, and after they graduated I spent my entire childhood in the South, moving from Alabama to Louisiana to Tennessee to Kentucky. I had the privilege of growing up in a region that was in the midst of one of the most remarkable and positive social and cultural transformations in American history. I grew up in the New South.

Consider, for a moment, the pace of change. Just a few years before I was born, white men and women rioted to keep black Americans out of schools and colleges. The stench of institutionalized racism pervaded the region, oppressing American citizens and leaving the South a region apart. It wasn’t the Confederacy, but it wasn’t like the rest of America. Nowhere was American sin more pervasive or obvious.

But then, the change happened. Propelled by a courageous civil-rights movement, reinforced in conscience-stricken churches, and super-charged by an ambitious business community, the New South emerged, and it emerged in a uniquely southern way. It was as if an entire region began to cling to what was good, and reject what was evil.

The transformation was obvious. African Americans, who once sought “the warmth of other suns” by migrating en masse to northern cities, voted once again with their feet and started to return to the South. Earlier this month, The Root’s Michael Harriot asked whether the South was “more racist” than other parts of the nation. His findings? The educational-achievement gap between black and white students is lower in the South, school segregation is less pronounced, the employment gap is lower, the differential in black/white incarceration rates is lower, there are disproportionately more black elected officials, and the black/white economic gap is smaller. Harriot’s ultimate conclusion is important:

Based on all of the objective evidence, African Americans in the South are closer to whites economically and politically and in education and employment. The opportunities aren’t equal, but there is less of a measurable racial divide in the Southern states than there is nationwide.

The New South at its best is still distinctively southern. It retains its disproportionate commitment to military service, its deep faith, and its skepticism of classism and elitism. At the same time, it rejects the racist fundamentalism of churches past and strives mightily to improve educational systems held back by generations of indifference and disregard.

Let’s not forget, this is a region that segregated its schools, restaurants, and hotels and blocked black Americans’ access to the ballot box just a few decades ago. The magnitude of the change it’s experienced in such a short time is astounding.

I’m not arguing that the New South is utopian. I’m not arguing that the New South has been magically cleansed of injustice and racism. Far from it. It’s still a region full of fallen, sinful men and women. There is still racism here. There is still injustice. The transformation hasn’t been painless or easy, and it’s far from complete. From its inception the New South has been waging a cultural, political, and religious struggle against the Old South. Old habits die hard, even as in countless conversations in countless communities, men and women have engaged their neighbors, opposed the old bigotries, and done the hard work of remaking and reshaping an entire region’s mores and values.

Roy Moore was the zombie manifestation of the Old South: resentful, nostalgic for the days of slavery, openly bigoted. His supporters and apologists were walking, talking caricatures of the region’s very worst. If your average southerner saw a network show portraying southerners like Moore and his carnival of followers, they’d be appalled. They’d believe that it was evidence of anti-southern bias and elite media ignorance. Yet here we were, yesterday, watching this pathetic display:

Roy Moore campaign spokesman responds with silence when asked if he knew people can be sworn in with a text other than the Christian Bible

So, when I looked at Alabama’s special election, I saw something far more than a mere political contest. I saw a cultural moment. I saw the gasps of a dying way of life that was seeking to resuscitate itself through the twisted fury of partisan politics. I saw local bigots hitching their wagons to national bigots — men like alt-right champion Steve Bannon — in a kind of Old South Pickett’s Charge.

Like Pickett, they failed. They lost a race that was almost impossible to lose.

Yesterday, the New South’s Democrats turned out, and the New South’s Republicans went on strike. The numbers are simply staggering. Democrats — especially black voters — showed up at the polls to support their own at a rate usually only seen in presidential elections: Doug Jones got more than 93 percent of Hillary Clinton’s vote total. Meanwhile, the GOP grassroots delivered their own message. In 2016, Donald Trump received more than 1.3 million Republican votes in Alabama. Roy Moore won less than half of that total. In other words, almost as many Republicans stayed home as Democrats voted for Doug Jones.

The message this sends is far, far more important than the outcome. Even in a time of extreme polarization, with a slim Senate majority, Alabama signaled that the Old South is dying without declaring that it is changing its politics or its New South identity. It’s still conservative. It’s still pro-life. And if Doug Jones serves Alabama as a progressive, his half-term will be his last term.

I write this piece in my Tennessee office, located only a few miles from the Alabama border. I saw the Old South/New South conflict play out in my own community, with my own neighbors. There were those who shared Roy Moore’s resentments and were untroubled by his bigotries. But there were many others who looked at him and said, “This is not who we are.” They asked their Alabama friends and family members to send a message, and the message was sent.

It’s become something of a cliché to say that “everything is terrible.” America is divided. Americans are bitter. Social media is toxic, and our political culture seems broken. But it turns out that everything isn’t terrible. There is a reservoir of decency in the state and region of my birth, and last night that decency was on full display. The New South confronted the Old South, and the New South won again.

— David French is a senior writer for National Review, a senior fellow at the National Review Institute, and an attorney.

Roy Moore did the nearly impossible and lost an Alabama Senate seat for the Republican party.

Only a historically flawed candidate could have managed it, and Roy Moore fit the bill. Twice bounced from the Alabama supreme court, prone to kooky and noxious views, ignorant of the law and public policy, Moore was already a shaky electoral bet even before allegations from multiple women emerged that he had dated or forced his attentions on them when he was a grown man and they were teenagers. Moore’s denials were tinny, contradictory, and unconvincing.

A swath of the GOP tried to do the prudent and decent thing and force Moore from the race in favor of a write-in candidate. But Moore, who has made a career of poor judgment, insisted that he wouldn’t leave. Probably only President Trump had the sway to get him out of the race. After a brief period of sitting on the fence, Trump decided to back Moore, under the influence of his cut-rate Svengali Steve Bannon, who never met a disreputable political candidate he didn’t like.

Trump and Bannon thought they were cleverly getting in front of the parade of an inevitable Moore victory, in ruby-red Alabama. Instead, they associated themselves with a man credibly accused of preying on young girls and got rebuked by Alabama voters whose standards weren’t as low as theirs.

There are several obvious lessons from Alabama: Character still matters; if he gets his way in other primary battles, Steve Bannon could help throw away other winnable Senate races (and depose Mitch McConnell as Senate majority leader not by electing Republicans hostile to him, but by destroying the GOP Senate majority altogether); Democratic constituents are, as we also saw in Virginia, highly mobilized, and Republicans will need impressive candidates and campaigns to try to survive next year’s mid-terms; Donald Trump would be well-served to listen to political advisors who aren’t, like Bannon, hoping to tear down the GOP for fun and profit.

If the GOP takes the right lessons from the debacle in Alabama, it will have served some purpose. Otherwise, the party will have suffered a stinging self-inflicted defeat, with others sure to follow.

For a year now, there’s been a myth among Republicans: the Legend of Trump.

It goes something like this. Once upon a time, there was an unbeatable candidate, a world-famous politician whose husband had been president, who received unquestioning loyalty from the media. Then came the Dragonslayer: a real-estate mogul with a toilet of gold and a tongue of iron, who cut the unconquerable evil queen down to size and seized the throne from her. The laws of political gravity simply didn’t apply to him: He could utter any vulgarity, brazen through any scandal, batter down any media infrastructure. And if Republicans followed him — if they lit their torches from his — they too could slay dragons.

Now, it’s quite possible that Donald Trump was the only Republican who could have defeated Hillary Clinton — other Republicans might have tried to take the high ground against a candidate significantly dirtier than the local garbage dump. Trump has no tact and no compunction, so he was always willing to drag her off her high horse. But Trump truly won not because he was a stellar candidate — far from it — but because Hillary Clinton was an awful candidate. And this means not only that his dragonslaying isn’t duplicable, but also that other candidates with similarly shady backgrounds who attempt to imitate him will end up failing dramatically.

In other words, the laws of political gravity still apply.

We learned that last night in Alabama, where Roy Moore lost an unlosable Senate race in a state that just three years ago went 97 percent for an unopposed Senator Jeff Sessions, who gave up his seat to become Trump’s much-maligned attorney general. Moore ran the worst campaign in recent memory, and he lost because of it. Republicans weren’t going to show up in droves to vote for a man credibly accused of child molestation, a fellow who deployed his campaign spokespeople to explain that Muslims can’t sit in Congress and that homosexuals ought to wind up in prison.

Moore was already in a dogfight before the sexual-abuse allegations. And he attempted to Trump his way out of those allegations: He stonewalled, he insisted it was all a media witch hunt, he shouted “establishment” over and over. He even called in the Dragonslayer himself, who tweeted from on high and rallied on the Alabama border. And Moore lost.

Trumpism, it turns out, isn’t a philosophy. It’s just a man who ran and won against the most unpalatable candidate in modern American history. That’s an incredible accomplishment. It’s not a strategy.

Yet the wandering minstrels will continue to sing the Legend of Trump for donors near and far. They’ll continue to suggest that Trumpism is a sword in a stone, ready to be plucked up and used against the “establishment” by any person brave enough to wield it. They’ll never define the “establishment” — they’ll ignore that Trump’s agenda will now be stymied thanks to their own brave endorsement of an “anti-establishment” candidate. They’ll blame Mitch McConnell for their own support of a wildly execrable candidate. They’ll never define “nationalist populism”; they’ll just state that anyone who opposes it opposes “the people.”

Leading them will be Breitbart’s Steve Bannon, a man who made Moore his avatar — a man who desperately wishes for fame and power, but can achieve it only on the back of others’ accomplishments. The only way he can preserve any impression of power is to blame others for his own shortcomings, and to preserve the Legend. But the Legend died in Alabama if it hadn’t already died in Virginia. Donors would be fools to trust Bannon.

Which isn’t to say they won’t. Many Republicans are still invested in the Legend of Trump. To acknowledge reality — to state simply that Trump did something amazing, but that he also had the help of a horrifying Democrat, and to recognize openly that Republicans will have to do better if they hope to win in the future — is uncomfortable. Better to pretend that Republicans have no serious problem outside of a few virtue-signaling cucks who wouldn’t turn out to vote for a guy who allegedly cruises the food court for dates.

Democrats had a similar legend until 2016: the Legend of Barack and Hillary. The Tea Party slew it long before Democrats were willing to acknowledge its death; 2016 was the final blow to the leftist mythology. Legends in politics never fade away: They die violent deaths. Republicans can learn their lesson now or they can take their lumps later.

— Ben Shapiro is the editor in chief of the Daily Wire.

Tell me if this scenario sounds familiar: Two Republican senators concerned with rising tax burdens on families propose a child tax credit (CTC) that would be refundable against both income and payroll taxes. Party leaders tout the plan as evidence of their pro-family bona fides and include it in their larger tax plan.

Business groups soon realize that the cost of the tax credit will hinder their ability to get the large business-tax cuts they seek and quietly push party leaders to drop the provisions allowing families to apply the credit against their payroll-tax liability. The Wall Street Journal cheers them on. Angry pro-family groups accuse them of selling out working-class families to appease wealthy donors, but their complaints are ignored.

The scenario I just described played out over two decades ago, when Senators Rod Grams (R., Minn.) and Dan Coats (R., Ind.) spearheaded efforts to introduce a refundable child tax credit as the “crown jewel” of the Republicans’ famous Contract with America in 1995. They recognized that the primary tax burden on working-class families came from payroll rather than income taxes, making refundability crucial to providing tax relief to those who most needed it.

According to an internal staff memo written at the time, this provision became a point of contention in a larger “battle between family tax relief and business tax relief,” which made it “vulnerable to the wandering eyes of Senators who wish to beef up aspects on [the] side of business tax relief.” Every dollar directed to refundability meant a dollar less to put toward reducing what they saw as an onerous capital-gains tax rate.

Business groups were making a fair point at the time. Sixty-five percent of the Republican tax package was aimed at families, while only 35 percent was aimed at businesses. Grams and Coats acquiesced on refundability as a modest compromise. Business groups pressured Congress to shrink the CTC even further, but a bipartisan coalition of pro-family social conservatives and anti-poverty liberals worked together to staunchly defend it. The result was the introduction of a wildly popular $500 child tax credit as part of the Taxpayer Relief Act of 1997.

Fast forward to 2017. Senators Marco Rubio (R., Fla.) and Mike Lee (R., Utah) have picked up where Grams and Coats left off. Seeking to fulfill the promise made by Republicans in the Contract with America, they have led efforts to increase the value of the CTC and make it refundable against payroll taxes, as originally intended. Like clockwork, business groups have set out to undermine Rubio and Lee, seeing the CTC proposal as a threat to business priorities such as corporate tax cuts.

The key difference is that the current tax bill has swung the family-business pendulum hard in the other direction, with businesses receiving 70 percent of the tax relief and families receiving only 30 percent. Recognizing the problem, Rubio and Lee filed an amendment to remedy this imbalance. It would have made the proposed $2,000 CTC refundable against both income and payroll taxes and paid for it by reducing the corporate income-tax rate from 35 percent to 21 percent instead of the planned 20 percent.

This modest compromise — a mere percentage point for families — would have provided the meaningful tax relief many working-class families were denied over 20 years ago and still left the U.S. with the lowest corporate tax rate in North America. Nevertheless, business groups became apoplectic.

The only hope for working-class families was that the same unlikely team of pro-family social conservatives and anti-poverty liberals who had worked together to introduce the CTC back in 1997 would rally around the Rubio-Lee amendment.

Evangelical leaders could have reminded Republicans that they were elected to work on behalf of the working and middle class, not the donor class. Anti-poverty liberals could have pushed Democratic senators to support the Rubio-Lee amendment even if they would vote against the final tax bill later. The Senate could have put families first. Instead, it put politics first. The Rubio-Lee amendment was defeated, 29–71. It was a bipartisan slap in the face to families.

Rather than reduce corporate tax cuts by 1 percentage point for families, the Republican leadership chose to champion the interests of their donors. Rather than make a principled effort to help their struggling constituents, the Democratic leadership chose to actively oppose good changes to embarrass the Republicans.

The coming days will be filled with debates about who is responsible for the major flaws in the tax-reform bill. One thing is clear, though: The shameful failure of Republicans and Democrats to work together is responsible for the absence of CTC reforms that would have put the needs of the working class over those of the donor class.

— Joshua T. McCabe is assistant dean of social sciences at Endicott College and author of The Fiscalization of Social Policy: How Taxpayers Trumped Children in the Fight Against Child Poverty (forthcoming from Oxford University Press).

Conservatives face enough issues on college campuses without creating conflicts.

Late last week, conservative writer Kassy Dillon, of Campus Reform, reported that an employee at a Fordham University coffee shop asked students to leave because they were conservatives. The facts reveal a slightly different situation.

Here’s what happened. Several members of the Fordham University College Republicans went to Rodrigue’s Coffee House on campus. One or more of the students purchased a cup of coffee, and the group sat down in the shop. Aaron Spring, one of the students involved, told Fox News that they went to the coffee shop to have a political conversation with one another. Some time later, an employee of the coffee shop was filmed asking the students to leave, saying their Make America Great Again hats were against the shop’s policy and that they had five minutes to get out. The College Republicans left shortly afterward.

This was no random latte run, however. The College Republicans chose this particular coffee shop to test the tolerance of the store’s “safe space” policy. Rodrigue’s Coffee House is a student-run club with an extensive conduct policy, which is outlined on its Facebook page and on laminated cards throughout the shop. An excerpt is reproduced below:

RODRIGUE’S STRIVES TO BE A SAFER SPACE ON FORDHAM’S CAMPUS. We welcome diversity and we encourage all those participating in the Rodrigue’s community to express themselves creatively and respectfully. As such, we urge everyone in the space to be aware of their own identity, and considerate of the personhood of their peers. For these reasons, consider the following: Do not make assumptions about someone’s gender, sexuality, race, class, or experiences. Be aware of the ways in which your words and actions impact others. Be aware of the boundaries of others’ space, physical or otherwise, and respect their consent. No racism — No sexism — No homophobia. Please understand that the above list is by no means exhaustive; these are only basic guidelines to help foster a safer space and a more inclusive community in Rodrigue’s. Ideas and actions that intend to violate any of the above are not welcome.

The policy is general, but conversations with students on campus suggest that the shop is known for having a very progressive clientele.

Fordham, for its part, released a statement distancing itself from the policy of the shop:

There is no University safe space policy, nor one that excludes any members of the Fordham community from any public spaces on the basis of their political views. Fordham is a community that values diverse opinions, and in which students should disagree with one another in a civil fashion. The University is still investigating the incident, and students who may have violated the University code of conduct will be met with the appropriate student conduct process.

The validity of safe spaces aside, the story is clear: A group of conservatives wanted to prove they could rile up their fellow students by wearing MAGA hats at a progressive campus hotspot. They proved their hypothesis but won over no new conservatives.

Conservatism and College Republicans need to be about more than pissing off classmates. Showcasing how liberals can be upset by Trumpism does nothing but entrench the status quo. As conservatism is dwarfed on college campuses, conservative students can act either as a nagging irritant or as proponents of a tempting ideology. Invading spaces that are reserved for particular students only exacerbates tensions.

Symbols matter. Conservatism needs to have a shelf life longer than President Trump. The president’s statements, combined with a media united behind the idea that Make America Great Again caps carry a host of negative connotations, make confronting liberals with them both provocative and useless. Students know what these symbols mean to others, even if they disagree with that meaning. Wearing MAGA hats serves only to anger some progressives and alienate others. If conservatism is to survive beyond the Trump presidency, it needs to have an appealing message.

Some conservatives have been drawn to tactics unfit for conservatism. While we may reject the premise of safe spaces, there is some reason for their existence. It’s partly a fault of conservatives for not adequately speaking to the needs of people of color or seeking common ground with those with whom we disagree. But rather than invading a safe space to provoke conflict, conservatives need to enable positive discourse with peers and work to counteract the notion that ours is an ideology persecuting others.

Tactics like this can accomplish one of two things, but rarely both. In our example, the viral video could make others aware of safe spaces and potential thought discrimination. Or those sympathetic to the employee could question the validity of such a safe-space policy. In this case, however, neither was achieved. Most conservatives already know that safe spaces exist on college campuses. Some liberals have visceral reactions to MAGA hats. Nothing new here. No one was made better off. No one changed their minds.

Winning converts comes by retaking the high ground, not by working fellow classmates into a lather in coffee shops. Especially in an environment where conservatives are wildly outgunned in the media and on college campuses, we need our credibility to be at an all-time high in our engagements with those with whom we disagree.

Intentionally creating situations to anger other students is inappropriate and unnecessary. Consider this: Which wins more converts to Christianity, the guy in Times Square yelling “You’re all going to hell if you don’t convert to Jesus!” or less demonstrative Christians exhibiting love and compassion to their neighbors? Remember Mike Pence’s line: “I’m a conservative, but I’m not angry about it.”

The conservatism that we should be pushing is one that draws people in with better arguments. Conservatism can be an ideology that stirs up animosity with gimmicks, or one that wins converts with ideas.

— Tyler Grant is a lawyer in Washington, D.C. (Twitter: @The_Tyler_Grant)

The terrorism charges filed in Manhattan federal court on Tuesday against the Port Authority jihadist underscore that, while the rhetoric from the White House is different, the change of administrations from Obama to Trump has not altered the Justice Department’s approach to terrorism: It is regarded principally as a law-enforcement issue, and its connection to Islamic doctrine goes studiously unnoticed.

The five-count complaint filed by the U.S. attorney for the Southern District of New York charges that Akayed Ullah, by attempting a mass-murder attack, materially supported the Islamic State terror network (ISIS). Ullah was scorched in the blast, and a few people near him suffered relatively minor injuries. Fortunately, he failed to kill and maim, as he told police he had hoped to do, a goal made manifest by his plan to strike at the height of the morning rush in one of the world’s busiest transportation hubs.

The 27-year-old Bangladesh native became a permanent resident alien through a combination of reckless government immigration policies — the visa lottery, by which his uncle got in, and chain migration, which enabled Ullah to follow in 2011. The complaint alleges that he was “inspired” by the Islamic State, law-enforcement parlance for a terrorist who is not a member of the “inspirational” jihadist organization or otherwise directed by it (i.e., no “operational” connection).

In the politically correct fashion that confuses the medium with the message, the government asserts that Ullah’s “radicalization” began three years ago and consisted of “view[ing] pro-ISIS materials online.” In other words, the Internet is the culprit.

Of course, it is actually sharia-supremacist ideology that “radicalizes” young Muslims. Typically, they imbibe it not merely through the Internet but by immersing themselves at extremist mosques and in communities in which the ideology is prevalent. Regardless of how the ideology is conveyed, it is the fervor of religious obligation that it incorporates, not the fact that it is easily found online, that explains its power. To grasp this, it is necessary to face up to the fact that the ideology is drawn from Islamic scripture and supported by centuries of fundamentalist scholarship. It is a frightening construction of Islam, but a well-rooted one, which is why so many devout Muslims fall prey to it.

Alas, it remains verboten in the Justice Department to acknowledge the obvious. The complaint thus tells us that Ullah was taken in by “violent extremist ideology” — as if he could as easily have been “inspired” by Antifa as by ISIS.

As adumbrated in my Monday column, no thought was given to treating Ullah as an alien combatant, which would have allowed him to be detained and interrogated under the laws of war. From the first, he was regarded as a criminal defendant in a judicial proceeding, with all the due-process protections that entails. The complaint explains that, upon being rushed to Bellevue Hospital for treatment of his injuries, he received Miranda warnings. He proceeded to tell investigators, “I did it for the Islamic State.”

Prior to conducting the attack, Ullah posted a statement on Facebook: “Trump you failed to protect your nation.” Agent Joseph Cerciello of the Department of Homeland Security, who swore out the complaint, deadpans, “Based on my training and experience, I understand Ullah to have been referring to the current President of the United States.”

In subsequently searching his Brooklyn apartment, agents found his passport, annotated in handwriting, “O America, die in your rage.”

The device the incompetent terrorist used was a crude pipe bomb, to be detonated by a nine-volt battery rigged up to a Christmas-tree lightbulb. He used plastic zip-ties to attach it to himself. Ullah told police he had begun pulling the components together in the last three weeks and constructed the bomb in his apartment about a week ago. (Reportedly, his brother, with whom he has done electrical work, lives in the same building.) The complaint says Ullah filled the pipe “with explosive material that he created,” and added metal screws because he believed their high-speed ejection in the blast would maximize the carnage.

By framing the case in the “material support to terrorism” charge (count one), prosecutors seek to highlight the barbaric actions of ISIS — if not its doctrinal underpinnings. Ullah is also charged with using a weapon of mass destruction (count two), bombing a public place (count three), and destroying property by means of fire and an explosive (count four).

For good measure, the government adds a charge (count five) of using a destructive device during a violent crime. The complaint is not the final, formal charge; it is just a means of explaining the probable cause for the arrest and holding the defendant while the grand jury considers the case. We will have to see, then, if count five is included when the case is eventually indicted: Among the violent crimes during which Ullah is said to have used the destructive device is . . . the use of a destructive device charged in count two. Though the statutes cited in counts two and five are different, the defense would no doubt contend that the offense is essentially the same, and thus that to charge it twice would violate double-jeopardy principles.

It will make no difference. The two bombing charges (counts two and three) each carry potential sentences of life imprisonment, and the remaining charges top out between 20 and 30 years’ incarceration. Ullah, whose atrocity is on video and indefensible, will never again walk free.

— Andrew C. McCarthy is a senior fellow at the National Review Institute and a contributing editor of National Review .

Editor’s Note: This piece originally appeared in City Journal and is reprinted here with permission.

Articles about America’s high levels of child poverty are a media evergreen. Here’s a typical entry, courtesy of the New York Times’s Eduardo Porter: “The percentage of children who are poor is more than three times as high in the United States as it is in Norway or the Netherlands. America has a larger proportion of poor children than Russia.” That’s right: Russia.

Outrageous as they seem, the assertions are true — at least in the sense that they line up with official statistics from government agencies and reputable nongovernmental organizations such as the OECD and UNICEF. International comparisons of the sort that Porter makes, though, should be accompanied by a forest of asterisks. Data limitations, varying definitions of poverty, and other wonky problems are rampant in these discussions.

The lousy child-poverty numbers should come with another qualifying asterisk, pointing to a very American reality. Before Europe’s recent migration crisis, the United States was the only developed country consistently to import millions of very poor, low-skilled families, from some of the most destitute places on earth — especially from undeveloped areas of Latin America — into its communities, schools, and hospitals. Let’s just say that Russia doesn’t care to do this — and, until recently, Norway and the Netherlands didn’t, either.

Both policymakers and pundits prefer silence on the relationship between America’s immigration system and poverty, and it’s easy to see why. The subject pushes us headlong into the sort of wrenching trade-offs that politicians and advocates prefer to avoid. Here’s the problem in a nutshell: You can allow mass low-skilled immigration, which many on the left and the right — and probably most poverty mavens — consider humane and quintessentially American. But if you do, pursuing the equally humane goal of substantially reducing child poverty becomes a lot harder.

In 1964, the federal government settled on a standard definition of poverty: an income less than three times the value of a hypothetical basic food basket. (That approach has its flaws, but it’s the measure used in the United States, so we’ll stick with it.) Back then, close to 23 percent of American kids were poor. With the important exception of the years between 1999 and 2007 — following the introduction of welfare reform in 1996 — when it declined to 16 percent, child poverty has bounced within three points of 20 percent since 1980. Currently, about 18 percent of kids are below the poverty line, amounting to 13,250,000 children. Other Anglo countries have lower child-poverty rates: The OECD puts Canada’s at 15 percent, with the United Kingdom and Australia lower still, between 11 percent and 13 percent. The lowest levels of all — under 10 percent — are found in the Nordic countries: Denmark, Norway, Iceland, and Finland.

How does immigration affect those post-1964 American child-poverty figures? Until 1980, it didn’t. The 1924 Immigration Act sharply reduced the number of immigrants from the poorer countries of Eastern and southern Europe, and it altogether banned Asians. (Mexicans, who had come to the U.S. as temporary agricultural workers and generally returned to their home country, weren’t imagined as potential citizens and thus were not subject to restrictive quotas.) The relatively small number of immigrants settling in the U.S. tended to be from affluent nations and had commensurate skills. According to the Migration Policy Institute, in 1970, immigrant children were less likely to be poor than were the children of native-born Americans.

By 1980, chiefly because of the 1965 Immigration and Naturalization Act, the situation had reversed: Immigrant kids were now poorer than native-born ones. That 1965 law, overturning the 1924 restrictions, made “family preference” a cornerstone of immigration policy — and, as it turned out, that meant a growing number of new Americans hailing from less-developed countries and lacking skills. The income gap between immigrant and native children widened. As of 1990, immigrant kids had poverty rates 50 percent higher than their native counterparts. At the turn of the millennium, more than one-fifth of immigrant children, compared with just 9 percent of non-Hispanic white kids, were classified as poor. Today, according to Center for Immigration Studies estimates, 31.1 percent of the poor under 18 are either immigrants or the American-born kids of immigrant parents.

Perhaps the most uncomfortable truth about these figures, and surely one reason they don’t often show up in media accounts, is that a large majority of America’s poor immigrant children — and, at this point, a large fraction of all its poor children — are Hispanic (see chart below). The U.S. started collecting separate poverty data on Hispanics in 1972. That year, 22.8 percent of those originally from Spanish-language countries of Latin America were poor. The percentage hasn’t risen that dramatically since then; it’s now at 25.6 percent. But because the Hispanic population in America quintupled during those years, these immigrants substantially expanded the nation’s poverty rolls. Hispanics are now the largest U.S. immigrant group by far — and the lowest-skilled. Pew estimates that Hispanics accounted for more than half the 22-million-person rise in the official poverty numbers between 1972 and 2012. Robert Samuelson of the Washington Post found that, between 1990 and 2016, Hispanics drove nearly three-quarters of the increase in the nation’s poverty population from 33.6 million to 40.6 million.

Ironically, then, at the same time that America’s War on Poverty was putting a spotlight on poor children, the new immigration system was steadily making the problem worse. In 1980, only 9 percent of American children were Hispanic. By 2009, that number had climbed to 22 percent. Almost two-thirds of these children were first- or second-generation immigrants, most of whose parents were needy. Nowadays, 31 percent of the country’s Hispanic children are in poverty. That percentage remains somewhat lower than the 36 percent of black children who are poor, true; but because the raw number of poor Hispanic kids — 5.1 million — is so much higher (poor black children number 3.7 million), they make up by far the largest group in the child-poverty statistics. As of 2016, Hispanic children account for more than one-third of America’s poor children. Between 1999 and 2008 alone, the U.S. added 1.8 million children to the poverty rolls; the Center for Immigration Studies reports that immigrants accounted for 45 percent of them.

Let’s be clear: Hispanic immigration isn’t the only reason that the U.S. has such troubling child-poverty rates. Other immigrant groups, such as North Africans and Laotians, add to the ranks of the under-18 poor. And American Indians have the highest rates of child poverty of all ethnic and racial groups. These are relatively small populations, however; combine Indians and Laotians, and you get fewer than a half-million poor children — a small chunk of the 14-plus-million total.

Even if we were following the immigration quotas set in 1924, the U.S. would be something of a child-poverty outlier. The nation’s biggest embarrassment is the alarming percentage of black children living in impoverished homes. Unsurprisingly, before the civil-rights movement, the numbers were higher; in 1966, almost 42 percent of black kids were poor. But those percentages started to improve in the later 1960s and in the 1970s. Then they soared again. By the 1980s and early 1990s, black child poverty was hovering miserably between 42 percent and almost 47 percent. Researchers attribute the lack of progress to the explosion in single-parent black families and welfare use. The current percentage of black kids living with a single mother — 66 percent — far surpasses that of any other demographic group. The 1996 welfare-reform bill and a strong economy helped bring black child poverty below 40 percent, a public-policy success — but the numbers remain far too high.

Immigrant poverty, though usually lumped within a single “child-poverty” number, belongs in a different category from black or Native American poverty. After all, immigrants voluntarily came to the United States, usually seeking opportunity. And immigrants of the past often found it. The reality of American upward mobility helps explain why, despite real hardships, poor immigrant childhood became such a powerful theme in American life and literature. Think of classic coming-of-age novels such as Betty Smith’s A Tree Grows in Brooklyn (about Irish immigrants), Henry Roth’s Call It Sleep (Jewish immigrants), and Paule Marshall’s Brown Girl, Brownstones (West Indians), all set in the first decades of the 20th century. With low pay, miserable work conditions, and unreliable hours, the immigrant groups that such novels depicted so realistically were as poor as — and arguably more openly discriminated against than — today’s Mexicans or Bangladeshis.

Their children, though, didn’t need a ton of education to leave the hard-knocks life behind. While schools of that era were doubtless more committed to assimilating young newcomers than are today’s diversity-celebrating institutions, sky-high dropout rates limited their impact. At the start of the 20th century, only 5 percent of the total population graduated from high school; the rate among immigrants would have been even lower. That doesn’t mean that education brought no advantages. Though economist George Borjas notes that endemic truancy and interrupted studies had ripple effects on incomes into following generations, the pre–World War II industrial economy offered a “range of blue collar opportunities” for immigrant children, as sociologists Roger Waldinger and Joel Perlman observe, and it required “only modest educations to move a notch or two above their parents.” It may have taken more than one generation, but most immigrant families could expect, if not Horatio Alger–style ascents, at least middle-class stability over time.

America’s economy has transformed in ways that have blocked many of the avenues to upward mobility available to the immigrant families of the past. The kind of middle-skilled jobs that once fed the aspirations of low-income strivers are withering. “Modest educations” will no longer raise poor immigrant children above their parents’ station. Drop out of high school, and you’ll be lucky to be making sandwiches at a local deli or cleaning rooms at a Motel 6. Even a high-school diploma can be a dead end, unless supplemented by the right kind of technical training. Get a college degree, however, and it is a different, happier story.

Yes, some immigrant groups known for their obsessional devotion to their children’s educational attainment (Chinese and Vietnamese immigrants come to mind) still have a good shot at middle-class stability, even though the parents typically arrive in America with little skill or education and, working in low-wage occupations, add to poverty numbers in the short term. But researchers have followed several generations of Hispanics — again, by far the largest immigrant group — and what they’ve found is much less encouraging. Hispanic immigrants start off okay. Raised in the U.S., the second generation graduates from high school and goes to college at higher rates than its parents, and it also earns more, though it continues to lag significantly behind native-born and other immigrant groups in these outcomes. Unfortunately, the third generation either stalls, or worse, takes what the Urban Institute calls a “U-turn.” Between the second and third generation, Hispanic high-school dropout rates go up and college-going declines. The third generation is more often disconnected — that is, neither attending school nor employed. Its income declines; its health, including obesity levels, looks worse. Most disturbing, as we look to the future, a third-generation Hispanic is more likely to be born to a single mother than were his first- or second-generation predecessors. The children of single mothers not only have high poverty rates, regardless of ethnic or racial background; they’re also less likely to experience upward mobility, as a mountain of data shows.

The Hispanic “U-turn” probably has many causes. Like most parents these days, Hispanics say they believe that education is essential for their children’s success. But cultural norms that prize family and tradition over achievement and independence often stand in the way. According to a study in the Hispanic Journal of Behavioral Sciences, Hispanic parents don’t talk and read to their young children as much as typical middle-class parents do (parents who tend to applaud their children’s attempts at self-expression); differences in verbal ability show up as early as age two. Hispanic parents of low-achieving students, most of whom also voiced high academic hopes for their kids, were still “happy with their children’s test scores even when the children performed poorly.” Their children tended to be similarly satisfied. Hispanic parents are also more reluctant than other aspiring parents to see their children travel to magnet schools or go away to college. And they become parents at younger ages. Though Hispanic teen birthrates have fallen — as they have for all groups, apart from American Indians — they remain the highest in the nation.

The sheer size of the Hispanic population hinders the assimilation that might moderate some of these preferences. Immigrants have always moved into ethnic enclaves in the United States when they could, but schools and workplaces and street life inevitably meant mixing with other groups, even when they couldn’t speak the same language. In many parts of the country, though, Hispanics are easily able to stick to their own. In fact, Generations of Exclusion, a longitudinal study of several generations of Mexican-Americans, found that a majority of fourth-generation Mexican-Americans live in Hispanic neighborhoods and marry other Hispanics.

Other affluent countries have lots of immigrants struggling to make it in a post-industrial economy. Those countries have lower child-poverty rates than we do — some much lower. But the background of the immigrants they accept is very different. Canada, New Zealand, and Australia are probably the best points of comparison. Like the United States, they are part of the Anglosphere and historically multicultural, with large numbers of foreign-born residents. However, unlike the U.S., they all use a points system that considers education levels and English ability, among other skills, to determine who gets immigration visas. The Brookings Institution’s Hamilton Project calculates that, while 30 percent of American immigrants have a low level of education — meaning less than a high-school diploma — and 35 percent have a college degree or higher, only 22 percent of Canadian immigrants lack a high-school diploma, while more than 46 percent have gone to college. (Canada tightened its points system after a government study found that a rise in poverty and inequality during the 1980s and 1990s could be almost entirely attributed to an influx of poorer immigrants.) Australia and New Zealand also have a considerably more favorable ratio of college-educated immigrants than does the United States. The same goes for the U.K.

The immigration ecosystem of the famously egalitarian Nordic countries also differs from the U.S.’s in ways that have kept their poverty numbers low. Historically, the Nordics didn’t welcome large numbers of greenhorns. As of 1940, for instance, only 1 percent of Sweden’s population was foreign-born, compared with almost 8.8 percent of Americans. After World War II, Nordic immigration numbers began rising, with most of the newcomers arriving from developed countries, as was the case in the U.S. until 1965. In Finland and Iceland, for instance, the plurality of immigrants today is Swedish and Polish, respectively. In Norway, the majority of immigrants come from Poland and Lithuania. Note that these groups have low poverty rates in the U.S., too.

Sweden presents the most interesting case, since it has been the most welcoming of the Nordic countries — and it has one of the most generous welfare states, providing numerous benefits for its immigrants. For a long time, the large majority of Sweden’s immigrants were from Finland, a country with a similar culture and economy. By the 1990s, the immigrant population began to change, though, as refugees arrived from the former Yugoslavia, Iran, and Iraq — populations with little in common culturally with Sweden and far more likely to be unskilled than immigrants from the European Union. By 2011, Sweden, like other European countries, was seeing an explosion in the number of asylum applicants from Syria, Afghanistan, and Africa; in 2015 and 2016, there was another spike. Sweden’s percentage of foreign-born has swelled to 17 percent — higher than the approximately 13 percent in the United States.

How has Sweden handled its growing diversity? We don’t have much reliable data from the most recent surge, but numbers from earlier this decade suggest the limits of relying on copious state benefits to acclimate cultural outsiders. In the U.S., immigrants are still more likely to be employed than are the native-born. In Sweden, the opposite holds. More than 26 percent of Swedish newcomers have remained unemployed long-term (for more than a year). Immigrants tend to be poorer than natives and more likely to fall back into poverty if they do surmount it. In fact, Sweden has one of the highest poverty rates among immigrants relative to native-born in the European Union. Most strikingly, a majority of children living in Sweden classified as poor in 2010 were immigrants.

Despite its resolute anti-poverty efforts, Sweden has, if anything, been less successful than the U.S. at bringing its second-generation immigrants up to speed. According to the OECD’s Programme for International Student Assessment (PISA) survey, Sweden has “declined over the past decade [between 2005 and 2015] from around average to significantly below average. . . . No other country taking part in PISA has seen a steeper fall.” The Swedish Education Agency reports that immigrant kids were responsible for 85 percent of a decline in school performance.

Outcomes such as these suggest that immigration optimists have underestimated the difficulty of integrating the less-educated from undeveloped countries, and their children, into advanced economies. A more honest accounting raises tough questions. Should the United States, as the Trump administration is proposing, and as is already the case in Canada and Australia, pursue a policy favoring higher-skilled immigration? Or do we accept higher levels of child poverty and lower social mobility as a cost of giving refuge and opportunity to people with none? If we accept such costs, does it even make sense to compare our child-poverty numbers with those of countries like Denmark or Sweden, which have only recently begun to take in large numbers of low-skilled immigrants?

Recent events in Denmark and Sweden put another question in stark relief. How many newcomers — especially from very different cultures — can a country successfully absorb, and on what timetable? A surge of asylum seekers beginning in 2015 forced both countries to introduce controls at their borders and limits to asylum acceptances. Their existing social services proved unable to cope with the swelling ranks of the needy; there was not enough housing, and, well, citizens weren’t always as welcoming as political leaders might have wished. The growing power of anti-immigrant political parties has shocked these legendarily tolerant cultures.

And yet one more question: How long can generous welfare policies survive large-scale low-skilled immigration? The beneficent Nordic countries are not the only ones that need to wonder. The National Academy of Sciences finds that immigration to America has an overall positive impact on the fiscal health of the federal government, but not so for the states and localities that must pay for education, libraries, some social services, and a good chunk of Medicaid. Fifty-five percent of California’s immigrant families use some kind of means-tested benefits; for natives, it’s 30 percent. The centrist Hamilton Project observes that high-immigrant states — California, New York, and New Jersey, among others — “may be burdened with costs that will only be recouped over a number of years, or, if children move elsewhere within the United States, may never fully be recovered.”

In short, confronting honestly the question of child-poverty rates in the United States — and, increasingly, such rates in other advanced countries — means acknowledging the reality that a newcomer’s background plays a vital role in immigrant success. Alternatively, of course, one can always fall back on damning worries about our current immigration system as evidence of racism. Remember November 8, 2016, if you want to know how that will play out.

Perhaps it is not the wisest use of resources to write apologetics for the hedge-fund industry in this populist era. Indeed, when both the Sanders-Warren wing of the Left and the Bannon-Trump wing of the Right have chosen the hedge-fund world as a target of ire, arguing for the virtue of the industry might seem like an exercise in futility. But proving the “virtue” of the industry is not necessarily the point; giving an accurate assessment of what hedge funds mean to our economy and how policymakers and the public ought to think about them is.

President Trump does not appear to disdain hedge-fund managers as much as his populist campaign rhetoric suggested. Major Trump donor and Breitbart financier Robert Mercer was the genius CEO of Renaissance Technologies. Trump’s secretary of commerce, Wilbur Ross, is one of the greatest distressed-asset investors in history. Trump’s eleven-day communications director, Anthony Scaramucci, was the founder and CEO of renowned fund-of-funds Skybridge Capital. Even Treasury Secretary Steven Mnuchin founded a hedge fund of his own, Dune Capital Management. In short, the brain trust of the hedge-fund community has a funny way of being very appealing when partisan or ideological interests are properly aligned.

On the surface, why hedge funds are demonized is hardly a mystery: The top 1 percent of hedge-funders make unbelievable gobs of money. Some are far-left in their political orientation (George Soros, Tom Steyer, Jim Simons), and some are far-right (Paul Singer, Bob Mercer), but those who achieve “celebrity status” tend to be exorbitantly wealthy, and are known to flaunt it. Images of Greenwich mansions as elite centers of money and power provoked powerful resentment as many Americans were coping with the financial crisis, and helped feed a narrative of class envy. That well over half of hedge funds do not survive at all, and that a significant number flutter around trying to find their way, goes ignored. The multibillion-dollar success stories are used to paint a picture of oligarchy, facts be damned.

A common criticism of hedge funds is that they charge excessively high fees for increasingly poor performance, a criticism that should warm the hearts of those who hold hedge funds in contempt. Indeed, if the goal was for all hedge funds to disappear, the best way to make that happen would be for them to charge high fees and to fail to justify such fees with performance. The buyers of hedge funds are sophisticated investors, pension plans, sovereign-wealth funds, endowments, and complex institutions — those most intolerant of incompetence. If it were true that hedge funds were failing to justify these buyers’ investments, critics of hedge funds would have nothing to sweat whatsoever — for surely the hedge-fund industry would deteriorate on its own. Perhaps what adherents of the “high fee, low performance” school of thought have not realized is that many buyers of hedge funds are happy to pay higher fees for better downside protection when markets turn south. Or perhaps buyers, rationally, do not compare the performance of hedge funds to that of the broad stock market in the middle of a screaming-bull equity market. Whether one is convinced or not that hedge funds are a valuable proposition, the fact remains that hedge funds operate according to the rules of a free-market economy — value must be consistently validated, or customers will vote with their feet. No hedge fund has made its fees through coercion.

So what is the reason to blast the hedge-fund space, once we realize that their fee calculus is actually a matter of market considerations? A popular criticism used to be that hedge funds added to the financial crisis, a theory that drips with irony. In fact, no hedge fund received a dollar of money from the Troubled Asset Relief Program; rather, the beneficiaries of TARP were the mainstream investment and commercial banks that over-levered their balance sheets to the hilt, then received a bailout from Congress when the assets they were holding lost value. Many hedge funds benefited in the aftermath of the crisis (those that bet on a recovery), but many tanked (those that bet the other way). It was an active manager’s playground, and risk-reward laws were well at play. Hedge funds posed no risk to the system whatsoever, and in fact, aided the economy a great deal: They provided liquidity, offered a two-sided market, and aided the process of price discovery when many esoteric derivatives were extremely hard to price. Theirs is a “put up or shut up” world, and access to Federal Reserve discount windows or taxpayer funds has never been available to hedge funds.

The irrational demonization of the hedge-fund industry has already brought about unintended consequences. After the financial crisis, Congress decided to prove that it was doing something by requiring hedge funds to disclose their positions on a quarterly basis in public filings. This increases risk by creating copycat investors, encouraging front-running, and prompting misallocated-capital decisions from those who see the funds’ positions without being privy to the full thesis that justifies their bets.

The new tax-reform bill is another instance of Congress using hedge funds to prove it is doing something. The bill purports to target the so-called “carried-interest loophole,” whereby “carry” (or performance fees) is taxed as capital gains rather than ordinary income to the manager receiving the performance fee. The issue is a political hot potato, with significant emotion on both sides. Populists have demanded that Congress take away the benefit to money managers, with Trump himself having campaigned on this very issue. So what has Congress managed to do? The real beneficiaries of this provision, private-equity managers, will be largely unaffected, as the tax-reform bill retains the generous loophole for anyone who holds an asset for more than three years (the average private-equity holding period is well over five years). Those barely benefiting from the present law because of their shorter holding period — in other words, hedge funds — will bear the brunt of this provision. As Cliff Asness put it in an op-ed for the Wall Street Journal:

Private-equity managers will retain their large benefit while hedgies and other active managers will lose their small one. It’s the Casablanca approach: Identify unsympathetic parties and hassle them a bit, just to show you’re doing something, even if it’s a complete diversion.

Could the irony of tax reform be that it enhances global competitiveness for multinational corporations, while stifling competitiveness for investors in those same companies?

If our goal is a future investing landscape that rewards good risk-taking and punishes bad risk-taking — a future landscape that is prepared to intelligently allocate capital in what will certainly be a challenging period — then we need to rethink the demonization of the hedge-fund industry. No, these men and women are not all angels, but of course neither are the regulators, politicians, or big banks. However, hedge-fund managers and their firms represent the best innovation that the market system has come up with for providing risk-adjusted investment strategies to investors, without using the American taxpayer as a backstop. The best investment managers prudently seek out inefficiencies in a marketplace that is chock-full of them. That happens to fill major investing needs of the hour. Price discovery, enhanced efficiency, and increased liquidity are just some of the benefits we risk losing by demonizing and diminishing the alternative asset-management sector.

The populist soothing that uninformed hedge-fund bashing facilitates is not worth it.

— David L. Bahnsen runs a bicoastal wealth-management firm, is a National Review Institute trustee, and is the author of the forthcoming book Crisis of Responsibility.

With the divisiveness of tax reform now hidden behind the closed doors of a conference committee, Congress has returned to the only thing that restores bipartisanship on Capitol Hill: spending money.

After pushing the deadline off for another two weeks, Congress must now act by December 23 in order to avoid the by-now-routine threat of a partial government shutdown. That means we should expect the usual threats and predictions of disaster, all just in time for the holidays. Current reporting suggests that Congress is likely to gather the courage to extend the deadline all the way to mid-January, just in time for us to go through it all a second time. The ongoing kabuki theater would have long since become a bore if it were not so likely that taxpayers are about to once again pay the price.

Most of the public fighting will be over non-budgetary issues, including Democratic demands that some sort of protection for undocumented “DREAMers” be included in a continuing resolution. But behind the scenes, there is far more agreement. The big question will be how much to exceed the all-but-moribund sequester caps on domestic and defense spending. Republicans are demanding an increase in military spending of roughly $54 billion above the sequester cap. Democrats appear more than willing to go along, if they receive a comparable increase in domestic spending. (Republicans have offered an increase in domestic spending of $37 billion above the cap.) The Democratic plans would increase spending by roughly $200 billion, while Republicans seek to hold that down to a mere $182 billion.

But even that may not be enough. The White House and some defense hawks are reportedly seeking an even bigger increase in defense spending, as much as $70 billion above the cap in 2018 and $80 billion in 2019. That would put total defense spending at more than $619 billion next year, the highest level since 2012.

Once Congress decides on how much to increase spending, it can finally get to work determining how to spend that money. None of the twelve annual appropriations bills have been passed yet, so we can expect January to bring another massive, pork-filled omnibus appropriation. Indeed, we may well see a repeat of the “cromnibus,” which combined both the CR and the omnibus spending bill. Is this any way to run a government?

The 2017 budget deficit is now expected to hit a devilish $666 billion. That’s up more than $80 billion from last year. The Congressional Budget Office predicts that we will return to the era of trillion-dollar deficits by 2022. The national debt is expected to rise from the current $20 trillion to more than $30 trillion by 2027.

And none of these projections account for the effects of the Republican tax plan. Even with a predicted increase in economic growth, the plan could push the deficit over $1 trillion as soon as next year, according to the Committee for a Responsible Federal Budget.

That tax cut, especially on the business side, is long overdue, and we should never fall into the trap of believing that our money somehow belongs to the government. Still, without commensurate reductions in spending, cutting taxes does nothing to shrink government or reduce its cost. It’s a recipe for long-term economic stagnation.

Regardless of where one falls on the political spectrum, one should expect two things from government: basic competence and a healthy respect for the taxpayers’ money. Sadly, Congress is failing once again in both regards.
