Monetary and Fiscal Policy
The role of government in the American economy extends far beyond its
activities as a regulator of specific industries. The government also
manages the overall pace of economic activity, seeking to maintain high
levels of employment and stable prices. It has two main tools for
achieving these objectives: fiscal policy, through which it determines
the appropriate level of taxes and spending; and monetary policy,
through which it manages the supply of money.
Much of the history of economic policy in
the United States since the Great Depression of the 1930s has involved a
continuing effort by the government to find a mix of fiscal and monetary
policies that will allow sustained growth and stable prices. That is no
easy task, and there have been notable failures along the way.
But the government has gotten better at
promoting sustainable growth. From 1854 through 1919, the American
economy spent almost as much time contracting as it did growing: the
average economic expansion (defined as an increase in output of goods
and services) lasted 27 months, while the average recession (a period of
declining output) lasted 22 months. From 1919 to 1945, the record
improved, with the average expansion lasting 35 months and the average
recession lasting 18 months. And from 1945 to 1991, things got even
better, with the average expansion lasting 50 months and the average
recession lasting just 11 months.
Inflation, however, has proven more
intractable. Prices were remarkably stable prior to World War II; the
consumer price level in 1940, for instance, was no higher than the price
level in 1778. But 40 years later, in 1980, the price level was 400
percent above the 1940 level.
In part, the government's relatively poor
record on inflation reflects the fact that it put more stress on
fighting recessions (and resulting increases in unemployment) during
much of the early post-war period. Beginning in 1979, however, the
government began paying more attention to inflation, and its record on
that score has improved markedly. By the late 1990s, the nation was
experiencing a gratifying combination of strong growth, low
unemployment, and slow inflation. But while policy-makers were generally
optimistic about the future, they admitted to some uncertainties about
what the new century would bring.
Fiscal Policy -- Budget and Taxes
The growth of government since the 1930s has been accompanied by
steady increases in government spending. In 1930, the federal government
accounted for just 3.3 percent of the nation's gross domestic product,
or total output of goods and services produced within the country.
That figure rose to almost 44 percent of GDP in 1944, at the height of
World War II, before falling back to 11.6 percent in 1948. But
government spending generally rose as a share of GDP in subsequent
years, reaching almost 24 percent in 1983 before falling back somewhat.
In 1999 it stood at about 21 percent.
The development of fiscal policy is an
elaborate process. Each year, the president proposes a budget, or
spending plan, to Congress. Lawmakers consider the president's proposals
in several steps. First, they decide on the overall level of spending
and taxes. Next, they divide that overall figure into separate
categories -- for national defense, health and human services, and
transportation, for instance. Finally, Congress considers individual
appropriations bills spelling out exactly how the money in each category
will be spent. Each appropriations bill ultimately must be signed by the
president in order to take effect. This budget process often takes an
entire session of Congress; the president presents his proposals in
early February, and Congress often does not finish its work on
appropriations bills until September (and sometimes even later).
The federal government's chief source of
funds to cover its expenses is the income tax on individuals, which in
1999 brought in about 48 percent of total federal revenues. Payroll
taxes, which finance the Social Security and Medicare programs, have
become increasingly important as those programs have grown. In 1998,
payroll taxes accounted for one-third of all federal revenues; employers
and workers each had to pay an amount equal to 7.65 percent of their
wages up to $68,400 a year. The federal government raises another 10
percent of its revenue from a tax on corporate profits, while
miscellaneous other taxes account for the remainder of its income.
(Local governments, in contrast, generally collect most of their tax
revenues from property taxes. State governments traditionally have
depended on sales and excise taxes, but state income taxes have grown
more important since World War II.)
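To make the payroll-tax arithmetic concrete, here is a minimal sketch
in Python using the 1998 figures cited above. One simplification to
note: in practice the $68,400 cap applied only to the 6.2 percent
Social Security share, while the 1.45 percent Medicare share was
uncapped; the sketch follows the text's combined 7.65 percent
description.

    # Simplified 1998 payroll-tax rule as described above: employer and
    # employee each pay 7.65 percent of wages, up to the annual cap.
    FICA_RATE = 0.0765   # combined Social Security + Medicare rate, one side
    WAGE_CAP = 68_400    # 1998 taxable-wage ceiling cited in the text

    def payroll_tax(annual_wages: float) -> float:
        """Tax owed by one side (employer or employee) under this rule."""
        return FICA_RATE * min(annual_wages, WAGE_CAP)

    for wages in (30_000, 68_400, 100_000):
        one_side = payroll_tax(wages)
        print(f"wages ${wages:>7,}: each side ${one_side:>8,.2f}, "
              f"combined ${2 * one_side:>9,.2f}")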
The federal income tax is levied on the
worldwide income of U.S. citizens and resident aliens and on certain
U.S. income of non-residents. The first U.S. income tax law was enacted
in 1862 to support the Civil War. The 1862 tax law also established the
Office of the Commissioner of Internal Revenue to collect taxes and
enforce tax laws either by seizing the property and income of non-payers
or through prosecution. The commissioner's powers and authority remain
much the same today.
The income tax was declared
unconstitutional by the Supreme Court in 1895 because it was not
apportioned among the states in conformity with the Constitution. It was
not until the 16th Amendment to the Constitution was adopted in 1913
that Congress was authorized to levy an income tax without
apportionment. Still, except during World War I, the income tax system
remained a relatively minor source of federal revenue until the 1930s.
During World War II, the modern system for managing federal income taxes
was introduced, income tax rates were raised to very high levels, and
the levy became the principal source of federal revenue. Beginning in
1943, the government required employers to collect income taxes from
workers by withholding certain sums from their paychecks, a policy that
streamlined collection and significantly increased the number of
taxpayers.
Most debates about the income tax today
revolve around three issues: the appropriate overall level of taxation;
how graduated, or "progressive," the tax should be; and the
extent to which the tax should be used to promote social objectives.
The overall level of taxation is decided
through budget negotiations. Although Americans allowed the government
to run up deficits, spending more than it collected in taxes during the
1970s, 1980s, and part of the 1990s, they generally believe budgets
should be balanced. Most Democrats, however, are willing to tolerate a
higher level of taxes to support a more active government, while
Republicans generally favor lower taxes and smaller government.
From the outset, the income tax has been a
progressive levy, meaning that rates are higher for people with more
income. Most Democrats favor a high degree of progressivity, arguing
that it is only fair to make people with more income pay more in taxes.
Many Republicans, however, believe a steeply progressive rate structure
discourages people from working and investing, and therefore hurts the
overall economy. Accordingly, many Republicans argue for a more uniform
rate structure. Some even suggest a uniform, or "flat," tax
rate for everybody. (Some economists -- both Democrats and Republicans
-- have suggested that the economy would fare better if the government
would eliminate the income tax altogether and replace it with a
consumption tax, taxing people on what they spend rather than what they
earn. Proponents argue that would encourage saving and investment. But
as of the end of the 1990s, the idea had not gained enough support to be
given much chance of being enacted.)
Over the years, lawmakers have carved out
various exemptions and deductions from the income tax to encourage
specific kinds of economic activity. Most notably, taxpayers are allowed
to subtract from their taxable income any interest they must pay on
loans used to buy homes. Similarly, the government allows lower- and
middle-income taxpayers to shelter from taxation certain amounts of
money that they save in special Individual Retirement Accounts (IRAs) to
meet their retirement expenses and to pay for their children's college
education.
The Tax Reform Act of 1986, perhaps the
most substantial reform of the U.S. tax system since the beginning of
the income tax, reduced income tax rates while cutting back many popular
income tax deductions (the home mortgage deduction and IRA deductions
were preserved, however). The Tax Reform Act replaced the previous law's
15 tax brackets, which had a top tax rate of 50 percent, with a system
that had only two tax brackets -- 15 percent and 28 percent. Other
provisions reduced, or eliminated, income taxes for millions of
low-income Americans.
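The mechanics of a graduated schedule like the one the 1986 act
created can be sketched in a few lines of Python. The rates below are
the act's 15 and 28 percent, but the $30,000 breakpoint is a
placeholder (the act's actual thresholds varied by filing status and
are not given here), and the 20 percent flat rate is purely
hypothetical, included only to show the contrast with a uniform-rate
system.

    # Illustrative two-bracket schedule in the spirit of the 1986 act.
    # Each (upper bound, rate) pair taxes the slice of income below it.
    BRACKETS = [(30_000, 0.15), (float("inf"), 0.28)]  # breakpoint assumed

    def progressive_tax(taxable_income: float) -> float:
        """Tax each slice of income at its own bracket's rate."""
        tax, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if taxable_income <= lower:
                break
            tax += rate * (min(taxable_income, upper) - lower)
            lower = upper
        return tax

    def flat_tax(taxable_income: float, rate: float = 0.20) -> float:
        """A "flat" alternative: one hypothetical rate on every dollar."""
        return rate * taxable_income

    for income in (20_000, 50_000, 150_000):
        print(f"${income:>8,}: progressive ${progressive_tax(income):>9,.2f}, "
              f"flat ${flat_tax(income):>9,.2f}")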
Fiscal Policy and Economic Stabilization
In the 1930s, with the United States reeling from the Great
Depression, the government began to use fiscal policy not just to
support itself or pursue social policies but to promote overall economic
growth and stability as well. Policy-makers were influenced by John
Maynard Keynes, an English economist who argued in The General Theory
of Employment, Interest, and Money (1936) that the rampant
joblessness of his time resulted from inadequate demand for goods and
services. According to Keynes, people did not have enough income to buy
everything the economy could produce, so prices fell and companies lost
money or went bankrupt. Without government intervention, Keynes said,
this could become a vicious cycle. As more companies went bankrupt, he
argued, more people would lose their jobs, making income fall further
and leading yet more companies to fail in a frightening downward spiral.
Keynes argued that government could halt the decline by increasing
spending on its own or by cutting taxes. Either way, incomes would rise,
people would spend more, and the economy could start growing again. If
the government had to run up a deficit to achieve this purpose, so be
it, Keynes said. In his view, the alternative -- deepening economic
decline -- would be worse.
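Keynes's logic implies the textbook spending multiplier, which the
passage does not spell out: if households respend a fraction c of each
new dollar of income (the marginal propensity to consume), an initial
injection of government spending ultimately raises total income by a
factor of 1/(1 - c). A small Python simulation of the rounds of
respending, with c = 0.8 chosen purely for illustration:

    # Rounds-of-respending simulation of the Keynesian multiplier.
    # The marginal propensity to consume is an assumed value, not a
    # figure from the text.
    MPC = 0.8           # fraction of each new dollar of income respent
    INJECTION = 100.0   # initial spending increase, in billions of dollars

    total, spent = 0.0, INJECTION
    for _ in range(200):   # iterate until further rounds are negligible
        total += spent     # this round's spending becomes someone's income
        spent *= MPC       # recipients respend a fraction of that income

    print(f"total income created: {total:.1f}")                  # ~500.0
    print(f"closed form 1/(1-c):  {INJECTION / (1 - MPC):.1f}")  # 500.0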
Keynes's ideas were only partially
accepted during the 1930s, but the huge boom in military spending during
World War II seemed to confirm his theories. As government spending
surged, people's incomes rose, factories again operated at full
capacity, and the hardships of the Depression faded into memory. After
the war, the economy continued to be fueled by pent-up demand from
consumers who had deferred buying homes and starting families.
By the 1960s, policy-makers seemed wedded
to Keynesian theories. But in retrospect, most Americans agree, the
government then made a series of mistakes in the economic policy arena
that eventually led to a reexamination of fiscal policy. After enacting
a tax cut in 1964 to stimulate economic growth and reduce unemployment,
President Lyndon B. Johnson (1963-1969) and Congress launched a series
of expensive domestic spending programs designed to alleviate poverty.
Johnson also increased military spending to pay for American involvement
in the Vietnam War. These large government programs, combined with
strong consumer spending, pushed the demand for goods and services
beyond what the economy could produce. Wages and prices started rising.
Soon, rising wages and prices fed each other in an ever-rising cycle.
Such an overall increase in prices is known as inflation.
Keynes had argued that during such periods
of excess demand, the government should reduce spending or raise taxes
to avert inflation. But anti-inflation fiscal policies are difficult to
sell politically, and the government resisted shifting to them. Then, in
the early 1970s, the nation was hit by a sharp rise in international oil
and food prices. This posed an acute dilemma for policy-makers. The
conventional anti-inflation strategy would be to restrain demand by
cutting federal spending or raising taxes. But this would have drained
income from an economy already suffering from higher oil prices. The
result would have been a sharp rise in unemployment. If policy-makers
chose to counter the loss of income caused by rising oil prices,
however, they would have had to increase spending or cut taxes. Since
neither policy could increase the supply of oil or food, boosting
demand without changing supply would merely mean higher prices.
President Jimmy Carter (1977-1981) sought
to resolve the dilemma with a two-pronged strategy. He geared fiscal
policy toward fighting unemployment, allowing the federal deficit to
swell and establishing countercyclical jobs programs for the unemployed.
To fight inflation, he established a program of voluntary wage and price
controls. Neither element of this strategy worked well. By the end of
the 1970s, the nation suffered both high unemployment and high
inflation.
While many Americans saw this
"stagflation" as evidence that Keynesian economics did not
work, another factor further reduced the government's ability to use
fiscal policy to manage the economy. Deficits now seemed to be a
permanent part of the fiscal scene. Deficits had emerged as a concern
during the stagnant 1970s. Then, in the 1980s, they grew further as
President Ronald Reagan (1981-1989) pursued a program of tax cuts and
increased military spending. By 1986, the deficit had swelled to
$221,000 million, or more than 22 percent of total federal spending.
Now, even if the government wanted to pursue spending or tax policies to
bolster demand, the deficit made such a strategy unthinkable.
Beginning in the late 1980s, reducing the
deficit became the predominant goal of fiscal policy. With foreign trade
opportunities expanding rapidly and technology spinning off new
products, there seemed to be little need for government policies to
stimulate growth. Instead, officials argued, a lower deficit would
reduce government borrowing and help bring down interest rates, making
it easier for businesses to acquire capital to finance expansion. The
government budget finally returned to surplus in 1998. This led to calls
for new tax cuts, but some of the enthusiasm for lower taxes was
tempered by the realization that the government would face major budget
challenges early in the new century as the enormous post-war baby-boom
generation reached retirement and started collecting retirement checks
from the Social Security system and medical benefits from the Medicare
program.
By the late 1990s, policy-makers were far
less likely than their predecessors to use fiscal policy to achieve
broad economic goals. Instead, they focused on narrower policy changes
designed to strengthen the economy at the margins. President Reagan and
his successor, George Bush (1989-1993), sought to reduce taxes on
capital gains -- that is, increases in wealth resulting from the
appreciation in the value of assets such as property or stocks. They
said such a change would increase incentives to save and invest.
Democrats resisted, arguing that such a change would overwhelmingly
benefit the rich. But as the budget deficit shrank, President Clinton
(1993-2001) acquiesced, and the maximum capital gains rate was trimmed
to 20 percent from 28 percent in 1997. Clinton, meanwhile, also sought
to affect the economy by promoting various education and job-training
programs designed to develop a highly skilled -- and hence, more
productive and competitive -- labor force.
Money in the U.S. Economy
While the budget remained enormously important, the job of managing
the overall economy shifted substantially from fiscal policy to monetary
policy during the later years of the 20th century. Monetary policy is
the province of the Federal Reserve System, an independent U.S.
government agency. "The Fed," as it is commonly known,
includes 12 regional Federal Reserve Banks and 25 Federal Reserve Bank
branches. All nationally chartered commercial banks are required by law
to be members of the Federal Reserve System; membership is optional for
state-chartered banks. In general, a bank that is a member of the
Federal Reserve System uses the Reserve Bank in its region in the same
way that a person uses a bank in his or her community.
The Federal Reserve Board of Governors
administers the Federal Reserve System. It has seven members, who are
appointed by the president to serve overlapping 14-year terms. Its most
important monetary policy decisions are made by the Federal Open Market
Committee (FOMC), which consists of the seven governors, the president
of the Federal Reserve Bank of New York, and presidents of four other
Federal Reserve banks who serve on a rotating basis. Although the
Federal Reserve System periodically must report on its actions to
Congress, the governors are, by law, independent from Congress and the
president. Reinforcing this independence, the Fed conducts its most
important policy discussions in private and often discloses them only
after a period of time has passed. It also covers all of its own
operating expenses from investment income and fees for its services.
The Federal Reserve has three main tools
for maintaining control over the supply of money and credit in the
economy. The most important is known as open market operations, or the
buying and selling of government securities. To increase the supply of
money, the Federal Reserve buys government securities from banks, other
businesses, or individuals, paying for them with a check (a new source
of money that it prints); when the Fed's checks are deposited in banks,
they create new reserves -- a portion of which banks can lend or invest,
thereby increasing the amount of money in circulation. On the other
hand, if the Fed wishes to reduce the money supply, it sells government
securities to banks, collecting reserves from them. Because they have
lower reserves, banks must reduce their lending, and the money supply
drops accordingly.
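The deposit-expansion process described above can be sketched as a
loop: the Fed's purchase creates new reserves, banks lend out whatever
they need not hold against deposits, and each loan returns to the
banking system as a fresh deposit. The 10 percent reserve ratio below
is an illustrative assumption, not a figure from the text; under it,
the textbook result is that total new money approaches the new
reserves divided by the reserve ratio.

    # Textbook deposit-expansion sketch: a Fed open market purchase
    # injects reserves; each round, banks hold back the required
    # fraction and lend the rest, which returns as a new deposit.
    RESERVE_RATIO = 0.10    # assumed required-reserve fraction
    NEW_RESERVES = 1_000.0  # Fed buys $1,000 of securities from a bank

    money_created, lendable = 0.0, NEW_RESERVES
    for _ in range(300):    # iterate until the residual is negligible
        money_created += lendable          # loans become new deposits
        lendable *= (1 - RESERVE_RATIO)    # part is now held in reserve

    print(f"new money created: {money_created:,.0f}")   # ~10,000
    print(f"reserves / ratio:  {NEW_RESERVES / RESERVE_RATIO:,.0f}")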
The Fed also can control the money supply
by specifying what reserves deposit-taking institutions must set aside
either as currency in their vaults or as deposits at their regional
Reserve Banks. Raising reserve requirements forces banks to withhold a
larger portion of their funds, thereby reducing the money supply, while
lowering requirements works the opposite way to increase the money
supply. Banks often lend each other money overnight to meet their
reserve requirements. The rate on such loans, known as the "federal
funds rate," is a key gauge of how "tight" or
"loose" monetary policy is at a given moment.
The Fed's third tool is the discount rate,
or the interest rate that commercial banks pay to borrow funds from
Reserve Banks. By raising or lowering the discount rate, the Fed can
promote or discourage borrowing and thus alter the amount of funds
available to banks for making loans.
These tools allow the Federal Reserve to
expand or contract the amount of money and credit in the U.S. economy.
If the money supply rises, credit is said to be loose. In this
situation, interest rates tend to drop, business spending and consumer
spending tend to rise, and employment increases; if the economy already
is operating near its full capacity, too much money can lead to
inflation, or a decline in the value of the dollar. When the money
supply contracts, on the other hand, credit is tight. In this situation,
interest rates tend to rise, spending levels off or declines, and
inflation abates; if the economy is operating below its capacity, tight
money can lead to rising unemployment.
Many factors complicate the ability of the
Federal Reserve to use monetary policy to promote specific goals,
however. For one thing, money takes many different forms, and it often
is unclear which one to target. In its most basic form, money consists
of coins and paper currency. Coins come in various denominations based
on the value of a dollar: the penny, which is worth one cent or
one-hundredth of a dollar; the nickel, five cents; the dime, 10 cents;
the quarter, 25 cents; the half dollar, 50 cents; and the dollar coin.
Paper money comes in denominations of $1, $2, $5, $10, $20, $50, and
$100.
A more important component of the money
supply consists of checking deposits, or bookkeeping entries held in
banks and other financial institutions. Individuals can make payments by
writing checks, which essentially instruct their banks to pay given sums
to the checks' recipients. Time deposits are similar to checking
deposits except the owner agrees to leave the sum on deposit for a
specified period; while depositors generally can withdraw the funds
earlier than the maturity date, they generally must pay a penalty and
forfeit some interest to do so. Money also includes money market funds,
which are shares in pools of short-term securities, as well as a variety
of other assets that can be converted easily into currency on short
notice.
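Economists conventionally group these components into aggregates such
as M1 (currency plus checking deposits) and the broader M2 (which adds
time deposits and retail money market fund shares); the Python snippet
below tallies the forms just described, with every dollar figure a
hypothetical placeholder rather than data from the text.

    # Toy tally of the money-supply components described above,
    # grouped into the conventional M1 and M2 aggregates.
    components = {               # $billions, all hypothetical
        "currency_and_coin":    500,
        "checking_deposits":  1_100,
        "time_deposits":      1_800,
        "money_market_funds":   900,
    }

    m1 = components["currency_and_coin"] + components["checking_deposits"]
    m2 = m1 + components["time_deposits"] + components["money_market_funds"]

    print(f"M1 (narrow money):  ${m1:,} billion")
    print(f"M2 (broader money): ${m2:,} billion")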
The amount of money held in different
forms can change from time to time, depending on preferences and other
factors that may or may not have any importance to the overall economy.
Further complicating the Fed's task, changes in the money supply affect
the economy only after a lag of uncertain duration.
Monetary Policy and Fiscal Stabilization
The Fed's operations have evolved over time in response to major
events. The Congress established the Federal Reserve System in 1913 to
strengthen the supervision of the banking system and stop bank panics
that had erupted periodically in the previous century. As a result of
the Great Depression in the 1930s, Congress gave the Fed authority to
vary reserve requirements and to regulate stock market margins (the
amount of cash people must put down when buying stock on credit).
Still, the Federal Reserve often tended to
defer to the elected officials in matters of overall economic policy.
During World War II, for instance, the Fed subordinated its operations
to helping the U.S. Treasury borrow money at low interest rates. Later,
when the government sold large amounts of Treasury securities to finance
the Korean War, the Fed bought heavily to keep the prices of these
securities from falling (thereby pumping up the money supply). The Fed
reasserted its independence in 1951, reaching an accord with the
Treasury that Federal Reserve policy should not be subordinated to
Treasury financing. But the central bank still did not stray too far
from the political orthodoxy. During the fiscally conservative
administration of President Dwight D. Eisenhower (1953-1961), for
instance, the Fed emphasized price stability and restriction of monetary
growth, while under more liberal presidents in the 1960s, it stressed
full employment and economic growth.
During much of the 1970s, the Fed allowed
rapid credit expansion in keeping with the government's desire to combat
unemployment. But with inflation increasingly ravaging the economy, the
central bank abruptly tightened monetary policy beginning in 1979. This
policy successfully slowed the growth of the money supply, but it helped
trigger sharp recessions in 1980 and 1981-1982. The inflation rate did
come down, however, and by the middle of the decade the Fed was again
able to pursue a cautiously expansionary policy. Interest rates,
however, stayed relatively high as the federal government had to borrow
heavily to finance its budget deficit. Rates slowly came down, too, as
the deficit narrowed and ultimately disappeared in the 1990s.
The growing importance of monetary policy
and the diminishing role played by fiscal policy in economic
stabilization efforts may reflect both political and economic realities.
The experience of the 1960s, 1970s, and 1980s suggests that
democratically elected governments may have more trouble using fiscal
policy to fight inflation than unemployment. Fighting inflation requires
government to take unpopular actions like reducing spending or raising
taxes, while traditional fiscal policy solutions to fighting
unemployment tend to be more popular since they require increasing
spending or cutting taxes. Political realities, in short, may favor a
bigger role for monetary policy during times of inflation.
Economic realities also suggest why fiscal
policy may be more suited to fighting unemployment, while monetary
policy may be more effective in fighting inflation. There is a limit to
how much monetary policy can do to help the economy during a period of
severe economic decline, such as the United States encountered during
the 1930s. The monetary policy remedy to economic decline is to increase
the amount of money in circulation, thereby cutting interest rates. But
once interest rates reach zero, the Fed can do no more. The United
States has not encountered this situation, which economists call the
"liquidity trap," in recent years, but Japan did during the
late 1990s. With its economy stagnant and interest rates near zero, many
economists argued that the Japanese government had to resort to more
aggressive fiscal policy, if necessary running up a sizable government
deficit to spur renewed spending and economic growth.
A New Economy?
Today, Federal Reserve economists use a number of measures to
determine whether monetary policy should be tighter or looser. One
approach is to compare the actual and potential growth rates of the
economy. Potential growth is presumed to equal the sum of growth in
the labor force and gains in productivity, or output per worker. In
the late 1990s, the labor force was projected to grow about 1 percent a
year, and productivity was thought to be rising somewhere between 1
percent and 1.5 percent. Therefore, the potential growth rate was
assumed to be somewhere between 2 percent and 2.5 percent. By this
measure, actual growth in excess of the long-term potential growth was
seen as raising a danger of inflation, thereby requiring tighter money.
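That arithmetic takes only a few lines of Python. The growth figures
below are the text's late-1990s estimates; the actual growth rate is a
hypothetical input, included to show how the comparison would be read.

    # Potential growth ~= labor-force growth + productivity growth,
    # using the late-1990s estimates cited in the text.
    labor_force_growth = 0.01                 # about 1 percent a year
    productivity_low, productivity_high = 0.01, 0.015  # 1 to 1.5 percent

    low = labor_force_growth + productivity_low
    high = labor_force_growth + productivity_high
    print(f"potential growth: {low:.1%} to {high:.1%}")   # 2.0% to 2.5%

    actual_growth = 0.04   # hypothetical actual rate, for comparison
    if actual_growth > high:
        print("actual growth exceeds potential -> risk of inflation")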
The second gauge is called NAIRU, or the
non-accelerating inflation rate of unemployment. Over time, economists
have noted that inflation tends to accelerate when joblessness drops
below a certain level. In the decade that ended in the early 1990s,
economists generally believed NAIRU was around 6 percent. But later in
the decade, it appeared to have dropped to about 5.5 percent.
Perhaps even more importantly, a range of
new technologies -- the microprocessor, the laser, fiber optics, and
satellites -- appeared in the late 1990s to be making the American
economy significantly more productive than economists had thought
possible. "The newest innovations, which we label information
technologies, have begun to alter the manner in which we do business and
create value, often in ways not readily foreseeable even five years
ago," Federal Reserve Chairman Alan Greenspan said in mid-1999.
Previously, lack of timely information
about customers' needs and the location of raw materials forced
businesses to operate with larger inventories and more workers than they
otherwise would need, according to Greenspan. But as the quality of
information improved, businesses could operate more efficiently.
Information technologies also allowed for quicker delivery times, and
they accelerated and streamlined the process of innovation. For
instance, design times dropped sharply as computer modeling reduced the
need for staff in architectural firms, Greenspan noted, and medical
diagnoses became faster, more thorough, and more accurate.
Such technological innovations apparently
accounted for an unexpected surge in productivity in the late 1990s.
After rising at less than a 1 percent annual rate in the early part of
the decade, productivity was growing at about a 3 percent rate toward
the end of the 1990s -- well ahead of what economists had expected.
Higher productivity meant that businesses could grow faster without
igniting inflation. Unexpectedly modest demands from workers for wage
increases -- a result, possibly, of the fact that workers felt less
secure about keeping their jobs in the rapidly changing economy -- also
helped subdue inflationary pressures.
Some economists scoffed at the notion
that America suddenly had developed a "new economy," one that was
able to grow much faster without inflation. While there undeniably was
increased global competition, they noted, many American industries
remained untouched by it. And while computers clearly were changing the
way Americans did business, they also were adding new layers of
complexity to business operations.
But as economists increasingly came to
agree with Greenspan that the economy was in the midst of a significant
"structural shift," the debate increasingly came to focus less
on whether the economy was changing and more on how long the
surprisingly strong performance could continue. The answer appeared to
depend, in part, on the oldest of economic ingredients -- labor. With
the economy growing strongly, workers displaced by technology easily
found jobs in newly emerging industries. As a result, employment was
rising in the late 1990s faster than the overall population. That trend
could not continue indefinitely. By mid-1999, the number of
"potential workers" aged 16 to 64 -- those who were unemployed
but willing to work if they could find jobs -- totaled about 10 million,
or about 5.7 percent of that age group. That was the lowest percentage
since the government began collecting such figures (in 1970).
Eventually, economists warned, the United States would face labor
shortages, which, in turn, could be expected to drive up wages, trigger
inflation, and prompt the Federal Reserve to engineer an economic
slowdown.
Still, many things could happen to
postpone that seemingly inevitable development. Immigration might
increase, thereby enlarging the pool of available workers. That seemed
unlikely, however, because the political climate in the United States
during the 1990s did not favor increased immigration. More likely, a
growing number of analysts believed that a growing number of Americans
would work past the traditional retirement age of 65. That also could
increase the supply of potential workers. Indeed, in 1999, the Committee
for Economic Development (CED), a prestigious business research
organization, called on employers to clear away barriers that previously
discouraged older workers from staying in the labor force. Current
trends suggested that by 2030, there would be fewer than three workers
for every person over the age of 65, compared to seven in 1950 -- an
unprecedented demographic transformation that the CED predicted would
leave businesses scrambling to find workers.
"Businesses have heretofore
demonstrated a preference for early retirement to make way for younger
workers," the group observed. "But this preference is a relic
from an era of labor surpluses; it will not be sustainable when labor
becomes scarce." While enjoying remarkable successes, in short, the
United States found itself moving into uncharted economic territory as
it ended the 1990s. While many saw a new economic era stretching
indefinitely into the future, others were less certain. Weighing the
uncertainties, many assumed a stance of cautious optimism.
"Regrettably, history is strewn with visions of such `new eras'
that, in the end, have proven to be a mirage," Greenspan noted in
1997. "In short, history counsels caution."
United States Economy
Source: U.S. Department of State