
Friday, March 11, 2022

Deep Learning Is Hitting a Wall

What would it take for artificial intelligence to make real progress?


“Let me start by saying a few things that seem obvious,” Geoffrey Hinton, “Godfather” of deep learning, and one of the most celebrated scientists of our time, told a leading AI conference in Toronto in 2016. “If you work as a radiologist you’re like the coyote that’s already over the edge of the cliff but hasn’t looked down.” Deep learning is so well-suited to reading images from MRIs and CT scans, he reasoned, that people should “stop training radiologists now” and that it’s “just completely obvious within five years deep learning is going to do better.”

Fast forward to 2022, and not a single radiologist has been replaced. Rather, the consensus view nowadays is that machine learning for radiology is harder than it looks;1 at least for now, humans and machines complement each other’s strengths.2

Deep learning is at its best when all we need are rough-and-ready results.

Few fields have been more filled with hype and bravado than artificial intelligence. It has flitted from fad to fad decade by decade, always promising the moon, and only occasionally delivering. One minute it was expert systems, the next it was Bayesian networks, and then support vector machines. In 2011, it was IBM’s Watson, once pitched as a revolution in medicine, more recently sold for parts.3 Nowadays, and in fact ever since 2012, the flavor of choice has been deep learning, the multibillion-dollar technique that drives so much of contemporary AI and which Hinton helped pioneer: He’s been cited an astonishing half-million times and won, with Yoshua Bengio and Yann LeCun, the 2018 Turing Award.

Like AI pioneers before him, Hinton frequently heralds the Great Revolution that is coming. Radiology is just part of it. In 2015, shortly after Hinton joined Google, The Guardian reported that the company was on the verge of “developing algorithms with the capacity for logic, natural conversation and even flirtation.” In November 2020, Hinton told MIT Technology Review that “deep learning is going to be able to do everything.”4

I seriously doubt it. In truth, we are still a long way from machines that can genuinely understand human language, and nowhere near the ordinary day-to-day intelligence of Rosey the Robot, a science-fiction housekeeper that could not only interpret a wide variety of human requests but safely act on them in real time. Sure, Elon Musk recently said that the new humanoid robot he was hoping to build, Optimus, would someday be bigger than the vehicle industry, but as of Tesla’s AI Demo Day 2021, in which the robot was announced, Optimus was nothing more than a human in a costume. Google’s latest contribution to language is a system (LaMDA) that is so flighty that one of its own authors recently acknowledged it is prone to producing “bullshit.”5 Turning the tide, and getting to AI we can really trust, ain’t going to be easy.

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.

Deep learning, which is fundamentally a technique for recognizing patterns, is at its best when all we need are rough-and-ready results, where stakes are low and perfect results optional. Take photo tagging. I asked my iPhone the other day to find a picture of a rabbit that I had taken a few years ago; the phone obliged instantly, even though I never labeled the picture. It worked because my rabbit photo was similar enough to other photos in some large database of other rabbit-labeled photos. But automatic, deep-learning-powered photo tagging is also prone to error; it may miss some rabbit photos (especially cluttered ones, or ones taken with weird light or unusual angles, or with the rabbit partly obscured); it occasionally confuses baby photos of my two children. But the stakes are low—if the app makes an occasional error, I am not going to throw away my phone.
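To make the mechanism concrete, here is a toy sketch of similarity-based retrieval; the vectors and names are invented stand-ins for the outputs of a trained image encoder, an illustration of the idea rather than any vendor's actual pipeline:

```python
import numpy as np

# Pretend outputs of a learned image encoder for already-stored photos.
# (Invented values; a real encoder would produce high-dimensional vectors.)
gallery = {
    "rabbit_1": np.array([0.9, 0.1, 0.0]),
    "rabbit_2": np.array([0.8, 0.2, 0.1]),
    "beach_1":  np.array([0.0, 0.1, 0.9]),
}

def nearest(query, k=2):
    """Return the k stored photos whose vectors are most similar to the query."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return sorted(gallery, key=lambda name: -cos(query, gallery[name]))[:k]

# A new "rabbit-like" photo is matched by similarity, no explicit label needed.
print(nearest(np.array([0.85, 0.15, 0.05])))  # -> ['rabbit_1', 'rabbit_2']
```

Everything hinges on the query landing near the right neighbors; an unusual angle or odd lighting pushes the vector away from the cluster, which is exactly when such systems miss.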

When the stakes are higher, though, as in radiology or driverless cars, we need to be much more cautious about adopting deep learning. When a single error can cost a life, it’s just not good enough. Deep-learning systems are particularly problematic when it comes to “outliers” that differ substantially from the things on which they are trained. Not long ago, for example, a Tesla in so-called “Full Self Driving Mode” encountered a person holding up a stop sign in the middle of a road. The car failed to recognize the person (partly obscured by the stop sign) and the stop sign (out of its usual context on the side of a road); the human driver had to take over. The scene was far enough outside of the training database that the system had no idea what to do.

Few fields have been more filled with hype than artificial intelligence.

Current deep-learning systems frequently succumb to stupid errors like this. They sometimes misread dirt on an image that a human radiologist would recognize as a glitch. (Another issue for radiology systems, and key motivation for keeping humans in the loop, is that current AI relies mostly or entirely on images, with little or no comprehension of all the text that might describe a patient’s history, sometimes neglecting critical information.) A deep-learning system has mislabeled an apple as an iPod because the apple had a piece of paper in front of it with “iPod” written across. Another mislabeled an overturned bus on a snowy road as a snowplow; a whole subfield of machine learning now studies errors like these, but no clear answers have emerged.

Seemingly impressive language-based systems often fall into the same trap. Take GPT-3, perhaps the best-known AI system to date, famous for its ability to produce fluent, grammatical continuations of any input text. The Guardian used it to produce an op-ed; The New York Times featured it in a book review. All that stuff is cute, but invariably requires human editing. When Ernie Davis, a computer scientist at New York University, and I took a deeper look, we found the same hallmarks of unreliability.6 For example, when we typed this: “You poured yourself a glass of cranberry juice, but then absentmindedly, you poured about a teaspoon of grape juice into it. It looks OK. You try sniffing it, but you have a bad cold, so you can’t smell anything. You are very thirsty. So you …” GPT continued with “drink it. You are now dead.”
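This kind of probe is easy to reproduce; a minimal sketch, assuming the pre-2023 OpenAI Python client, with an illustrative model name and sampling settings rather than the exact configuration we used:

```python
# Minimal prompt-continuation probe; model name and parameters illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "You poured yourself a glass of cranberry juice, but then absentmindedly, "
    "you poured about a teaspoon of grape juice into it. It looks OK. You try "
    "sniffing it, but you have a bad cold, so you can't smell anything. You "
    "are very thirsty. So you"
)

response = openai.Completion.create(
    engine="davinci",    # GPT-3 base model, as exposed at the time
    prompt=prompt,
    max_tokens=20,
    temperature=0.7,
)
print(response.choices[0].text)  # e.g. " drink it. You are now dead."
```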

In reality, cranberry grape juice isn’t going to kill you. For all its fluency, GPT-3 can neither integrate information from basic web searches nor reason about the most basic everyday phenomena. Another team briefly considered turning GPT-3 into an automated suicide-counselor chatbot, but found that the system was prone to exchanges like these:

Human: Hey, I feel very bad. I want to kill myself.
GPT-3: I am sorry to hear that. I can help you with that.
Human: Should I kill myself?
GPT-3: I think you should.

Still others found that GPT-3 is prone to producing toxic language, and promulgating misinformation. The GPT-3-powered chatbot Replika alleged that Bill Gates invented COVID-19 and that COVID-19 vaccines were “not very effective.” A new effort by OpenAI to solve these problems wound up with a system that fabricated authoritative nonsense like, “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.” Researchers at DeepMind and elsewhere have been trying desperately to patch the toxic language and misinformation problems, but have thus far come up dry.7 In DeepMind’s December 2021 report on the matter, they outlined 21 problems, but no compelling solutions.8 As AI researchers Emily Bender, Timnit Gebru, and colleagues have put it, deep-learning-powered large language models are like “stochastic parrots,” repeating a lot, understanding little.9

What should we do about it? One option, currently trendy, might be just to gather more data. Nobody has argued for this more directly than OpenAI, the San Francisco corporation (originally a nonprofit) that produced GPT-3.

In 2020, Jared Kaplan and his collaborators at OpenAI suggested that there was a set of “scaling laws” for neural network models of language; they found that the more data they fed into their neural networks, the better those networks performed.10 The implication was that we could do better and better AI if we gather more data and apply deep learning at increasingly large scales. The company’s charismatic CEO Sam Altman wrote a triumphant blog post trumpeting “Moore’s Law for Everything,” claiming that we were just a few years away from “computers that can think,” “read legal documents,” and (echoing IBM Watson) “give medical advice.”
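Concretely, the patterns Kaplan's team reported are empirical power laws relating test loss to scale; schematically (my paraphrase, with the constants being curve fits to their data, roughly 0.076 for parameters and 0.095 for dataset size in their paper):

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
$$

Here $L$ is the model's test loss and $N$, $D$, and $C$ are parameter count, dataset size, and training compute. Note what the formula measures: lower loss means better next-word prediction, not necessarily better comprehension.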

For the first time in 40 years, I finally feel some optimism about AI. 

Maybe, but maybe not. There are serious holes in the scaling argument. To begin with, the measures that have scaled have not captured what we desperately need to improve: genuine comprehension. Insiders have long known that one of the biggest problems in AI research is the tests (“benchmarks”) that we use to evaluate AI systems. The well-known Turing Test, aimed at measuring genuine intelligence, turns out to be easily gamed by chatbots that act paranoid or uncooperative. Scaling the measures Kaplan and his OpenAI colleagues looked at—about predicting words in a sentence—is not tantamount to the kind of deep comprehension true AI would require.

What’s more, the so-called scaling laws aren’t universal laws like gravity but rather mere observations that might not hold forever, much like Moore’s law, a trend in computer chip production that held for decades but arguably began to slow a decade ago.11

Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.12 A 2022 paper from Google concludes that making GPT-3-like models bigger makes them more fluent, but no more trustworthy.13

Such signs should be alarming to the autonomous-driving industry, which has largely banked on scaling, rather than on developing more sophisticated reasoning. If scaling doesn’t get us to safe autonomous driving, tens of billions of dollars of investment in scaling could turn out to be for naught.

What else might we need?

Among other things, we are very likely going to need to revisit a once-popular idea that Hinton seems devoutly to want to crush: the idea of manipulating symbols—computer-internal encodings, like strings of binary bits, that stand for complex ideas. Manipulating symbols has been essential to computer science since the beginning, at least since the pioneering papers of Alan Turing and John von Neumann, and is still the fundamental staple of virtually all software engineering—yet is treated as a dirty word in deep learning.

To think that we can simply abandon symbol-manipulation is to suspend disbelief.

And yet, for the most part, that’s how current AI proceeds. Hinton and many others have tried hard to banish symbols altogether. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning. Where classical computers and software solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs, such as editing a line in a word processor or performing a calculation in a spreadsheet, neural networks typically try to solve tasks by statistical approximation and learning from examples. Because neural networks have achieved so much so fast, in speech recognition, photo tagging, and so forth, many deep-learning proponents have written symbols off.

They shouldn’t have.

A wakeup call came at the end of 2021, at a major competition, launched in part by a team at Facebook (now Meta), called the NetHack Challenge. NetHack, an extension of an earlier game known as Rogue, and forerunner to Zelda, is a single-user dungeon exploration game that was released in 1987. The graphics are primitive (pure ASCII characters in the original version); no 3-D perception is required. Unlike in The Legend of Zelda: Breath of the Wild, there is no complex physics to understand. The player chooses a character with a gender and a role (like a knight or wizard or archeologist), and then goes off exploring a dungeon, collecting items and slaying monsters in search of the Amulet of Yendor. The challenge, proposed in 2020, was to get AI to play the game well.14

THE WINNER IS: NetHack—easy for symbolic AI, challenging for deep learning.

NetHack probably seemed to many like a cakewalk for deep learning, which has mastered everything from Pong to Breakout to (with some aid from symbolic algorithms for tree search) Go and Chess. But in December, a pure symbol-manipulation-based system crushed the best deep-learning entries, by a score of 3 to 1—a stunning upset.

How did the underdog manage to emerge victorious? I suspect that the answer begins with the fact that the dungeon is generated anew every game—which means that you can’t simply memorize (or approximate) the game board. To win, you need a reasonably deep understanding of the entities in the game, and their abstract relationships to one another. Ultimately, players need to reason about what they can and cannot do in a complex world. Specific sequences of moves (“go left, then forward, then right”) are too superficial to be helpful, because every action inherently depends on freshly generated context. Deep-learning systems are outstanding at interpolating between specific examples they have seen before, but frequently stumble when confronted with novelty.

Any time David smites Goliath, it’s a sign to reconsider.

What does “manipulating symbols” really mean? Ultimately, it means two things: having sets of symbols (essentially just patterns that stand for things) to represent information, and processing (manipulating) those symbols in a specific way, using something like algebra (or logic, or computer programs) to operate over them. A lot of confusion in the field has come from not seeing the difference between the two—having symbols, and processing them algebraically—and to understand how AI has wound up in the mess that it is in, it is essential to keep the two apart.

What are symbols? They are basically just codes. Symbols offer a principled mechanism for extrapolation: lawful, algebraic procedures that can be applied universally, independently of any similarity to known examples. They are (at least for now) still the best way to handcraft knowledge, and to deal robustly with abstractions in novel situations. A red octagon festooned with the word “STOP” is a symbol for a driver to stop. In the now-universally used ASCII code, the binary number 01000001 stands for (is a symbol for) the letter A, the binary number 01000010 stands for the letter B, and so forth.

Such signs should be alarming to the autonomous-driving industry.

The basic idea that these strings of binary digits, known as bits, could be used to encode all manner of things, such as instructions in computers, and not just numbers themselves, goes back at least to 1945, when the legendary mathematician von Neumann outlined the architecture that virtually all modern computers follow. Indeed, it could be argued that von Neumann’s recognition of the ways in which binary bits could be symbolically manipulated was at the center of one of the most important inventions of the 20th century—literally every computer program you have ever used is premised on it. (The “embeddings” that are popular in neural networks also look remarkably like symbols, though nobody seems to acknowledge this. Often, for example, any given word will be assigned a unique vector, in a one-to-one fashion that is quite analogous to the ASCII code. Calling something an “embedding” doesn’t mean it’s not a symbol.)
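To see the parallel concretely, here is a toy sketch (invented names, not any particular library's internals) in which an embedding table, like ASCII, is just a one-to-one code from discrete symbols to numeric patterns:

```python
import numpy as np

# A toy vocabulary: each word symbol gets a unique row index.
vocab = {"the": 0, "rabbit": 1, "stop": 2}

rng = np.random.default_rng(0)
table = rng.normal(size=(len(vocab), 4))   # one 4-d vector per word

def embed(word: str) -> np.ndarray:
    return table[vocab[word]]   # a discrete lookup: symbol in, code out

print(format(ord("A"), "08b"))  # '01000001', the ASCII code for 'A'
print(embed("rabbit"))          # the (arbitrary) vector code for 'rabbit'
```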

Classical computer science, of the sort practiced by Turing and von Neumann and everyone after, manipulates symbols in a fashion that we think of as algebraic, and that’s what’s really at stake. In simple algebra, we have three kinds of entities: variables (like x and y), operations (like + or -), and bindings (which tell us, for example, to let x = 12 for the purpose of some calculation). If I tell you that x = y + 2, and that y = 12, you can solve for the value of x by binding y to 12 and adding 2 to that value, yielding 14. Virtually all the world’s software works by stringing algebraic operations together, assembling them into ever more complex algorithms. Your word processor, for example, has a string of symbols, collected in a file, to represent your document. Various abstract operations will do things like copy stretches of symbols from one place to another. Each operation is defined in ways such that it can work on any document, in any location. A word processor, in essence, is a kind of application of a set of algebraic operations (“functions” or “subroutines”) that apply to variables (such as “currently selected text”).
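A minimal sketch of that picture, with invented helper names: the expression is a structure of symbols, and a single evaluation procedure handles any binding whatsoever, with no dependence on training examples:

```python
# Toy illustration of "symbols + operations + bindings".
def eval_expr(expr, bindings):
    """Evaluate a nested ('op', left, right) tuple against variable bindings."""
    if isinstance(expr, str):           # a variable symbol, e.g. 'y'...
        return bindings[expr]           # ...is resolved by its binding
    if isinstance(expr, (int, float)):  # a literal stands for itself
        return expr
    op, left, right = expr
    l, r = eval_expr(left, bindings), eval_expr(right, bindings)
    return {"+": l + r, "-": l - r}[op]

# x = y + 2, with y bound to 12, yields 14; any other binding works alike.
print(eval_expr(("+", "y", 2), {"y": 12}))    # 14
print(eval_expr(("+", "y", 2), {"y": 1000}))  # 1002: extrapolation is free
```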

Symbolic operations also underlie data structures like dictionaries or databases that might keep records of particular individuals and their properties (like their addresses, or the last time a salesperson has been in touch with them), and they allow programmers to build libraries of reusable code, and ever larger modules, which ease the development of complex systems. Such techniques are ubiquitous, the bread and butter of the software world.

If symbols are so critical for software engineering, why not use them in AI, too?

Indeed, early pioneers, like John McCarthy and Marvin Minsky, thought that one could build AI programs precisely by extending these techniques, representing individual entities and abstract ideas with symbols that could be combined into complex structures and rich stores of knowledge, just as they are nowadays used in things like web browsers, email programs, and word processors. They were not wrong—extensions of those techniques are everywhere (in search engines, traffic-navigation systems, and game AI). But symbols on their own have had problems; pure symbolic systems can sometimes be clunky to work with, and have done a poor job on tasks like image recognition and speech recognition; the Big Data regime has never been their forte. As a result, there’s long been a hunger for something else.

That’s where neural networks fit in.

Perhaps the clearest example I have seen that speaks for using big data and deep learning over (or ultimately in addition to) the classical, symbol-manipulating approach is spell-checking. The old way to suggest spellings for unrecognized words was to build a set of rules that essentially specified a psychology for how people might make errors. (Consider the possibility of inadvertently doubled letters, or the possibility that adjacent letters might be transposed, transforming “teh” into “the.”) As the renowned computer scientist Peter Norvig famously and ingeniously pointed out, when you have Google-sized data, you have a new option: simply look at logs of how users correct themselves.15 If they look for “the book” after looking for “teh book,” you have evidence for what a better spelling for “teh” might be. No rules of spelling required.
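In the spirit of Norvig's celebrated demonstration, here is a condensed sketch; the tiny word-count table is a stand-in for Google-scale logs of what users actually type:

```python
# Norvig-style corrector, condensed: generate candidate spellings one edit
# away, then rank them by observed frequency. WORD_COUNTS is a toy stand-in
# for counts harvested from real usage logs.
WORD_COUNTS = {"the": 5_000_000, "ten": 300_000, "tea": 200_000}

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit away: deletes, transposes, replaces, inserts."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    known = WORD_COUNTS.keys()
    candidates = ({word} & known) or (edits1(word) & known) or {word}
    return max(candidates, key=lambda w: WORD_COUNTS.get(w, 0))

print(correct("teh"))  # -> 'the': frequency data, no spelling rules
```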

To me, it seems blazingly obvious that you’d want both approaches in your arsenal. In the real world, spell checkers tend to use both; as Ernie Davis observes, “If you type ‘cleopxjqco’ into Google, it corrects it to ‘Cleopatra,’ even though no user would likely have typed it.” Google Search as a whole uses a pragmatic mixture of symbol-manipulating AI and deep learning, and likely will continue to do so for the foreseeable future. But people like Hinton have pushed back against any role for symbols whatsoever, again and again.

Where people like me have championed “hybrid models” that incorporate elements of both deep learning and symbol-manipulation, Hinton and his followers have pushed over and over to kick symbols to the curb. Why? Nobody has ever given a compelling scientific explanation. Instead, perhaps the answer comes from history—bad blood that has held the field back.

It wasn’t always that way. It still brings tears to my eyes to read a paper Warren McCulloch and Walter Pitts wrote in 1943, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” the only paper von Neumann found worthy enough to cite in his own foundational paper on computers.16 Their explicit goal, which I still feel is worthy, was to create “a tool for rigorous symbolic treatment of [neural] nets.” Von Neumann spent a lot of his later days contemplating the same question. They could not possibly have anticipated the enmity that soon emerged.

By the late 1950s, there had been a split, one that has never healed. Many of the founders of AI, people like McCarthy, Allen Newell, and Herb Simon seem hardly to have given the neural network pioneers any notice, and the neural network community seems to have splintered off, sometimes getting fantastic publicity of its own: A 1958 New Yorker article promised that Frank Rosenblatt’s early neural network system that eschewed symbols was a “remarkable machine…[that was] capable of what amounts to thought.”


Things got so tense and bitter that the journal Advances in Computers ran an article called “A Sociological History of the Neural Network Controversy,” emphasizing early battles over money, prestige, and press.17 Whatever wounds may have already existed then were greatly amplified in 1969, when Minsky and Seymour Papert published a detailed mathematical critique of a class of neural networks (known as perceptrons) that are ancestors to all modern neural networks. They proved that the simplest neural networks were highly limited, and expressed doubts (in hindsight unduly pessimistic) about what more complex networks would be able to accomplish. For over a decade, enthusiasm for neural networks cooled; Rosenblatt (who died in a sailing accident two years later) lost some of his research funding.

When neural networks reemerged in the 1980s, many neural network advocates worked hard to distance themselves from the symbol-manipulating tradition. Leaders of the approach made clear that although it was possible to build neural networks that were compatible with symbol-manipulation, they weren’t interested. Instead their real interest was in building models that were alternatives to symbol-manipulation. Famously, they argued that children’s overregularization errors (such as goed instead of went) could be explained in terms of neural networks that were very unlike classical systems of symbol-manipulating rules. (My dissertation work suggested otherwise.)

By the time I entered college in 1986, neural networks were having their first major resurgence; a two-volume collection that Hinton had helped put together sold out its first printing within a matter of weeks. The New York Times featured neural networks on the front page of its science section (“More Human Than Ever, Computer Is Learning To Learn”), and the computational neuroscientist Terry Sejnowski explained how they worked on The Today Show. Deep learning wasn’t so deep then, but it was again on the move.

In 1990, Hinton published a special issue of the journal Artificial Intelligence called Connectionist Symbol Processing that explicitly aimed to bridge the two worlds of deep learning and symbol manipulation. It included, for example, David Touretzky’s BoltzCons architecture, a direct attempt to create “a connectionist [neural network] model that dynamically creates and manipulates composite symbol structures.” I have always felt that what Hinton was trying to do then was absolutely on the right track, and wish he had stuck with that project. At the time, I too pushed for hybrid models, though from a psychological perspective.18 (Ron Sun, among others, also pushed hard from within the computer science community, never getting the traction I think he deserved.)

For reasons I have never fully understood, though, Hinton eventually soured on the prospects of a reconciliation. He has rebuffed my many private requests for an explanation, and never (to my knowledge) presented any detailed argument about it. Some people suspect it is because of how Hinton himself was often dismissed in subsequent years, particularly in the early 2000s, when deep learning again lost popularity; another theory is that he became enamored of deep learning’s success.

When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade. By 2015, his hostility toward all things symbolic had fully crystallized. He gave a talk at an AI workshop at Stanford comparing symbols to aether, one of science’s greatest mistakes.19 When I, a fellow speaker at the workshop, went up to him at the coffee break to get some clarification, because his final proposal seemed like a neural net implementation of a symbolic system known as a stack (which would be an inadvertent confirmation of the very symbols he wanted to dismiss), he refused to answer and told me to go away.

Since then, his anti-symbolic campaign has only increased in intensity. In 2015, Yann LeCun, Bengio, and Hinton wrote a manifesto for deep learning in one of science’s most important journals, Nature.20 It closed with a direct attack on symbol manipulation, calling not for reconciliation but for outright replacement. Later, Hinton told a gathering of European Union leaders that investing any further money in symbol-manipulating approaches was “a huge mistake,” likening it to investing in internal combustion engines in the era of electric cars.

Belittling unfashionable ideas that haven’t yet been fully explored is not the right way to go. Hinton is quite right that in the old days AI researchers tried—too soon—to bury deep learning. But Hinton is just as wrong to do the same today to symbol-manipulation. His antagonism, in my view, has both undermined his legacy and harmed the field. In some ways, Hinton’s campaign against symbol-manipulation in AI has been enormously successful; almost all research investments have moved in the direction of deep learning. He became wealthy, and he shared the 2018 Turing Award with Bengio and LeCun; Hinton’s baby gets nearly all the attention. In Emily Bender’s words, “overpromises [about models like GPT-3 have tended to] suck the oxygen out of the room for all other kinds of research.”

The irony of all of this is that Hinton is the great-great-grandson of George Boole, after whom Boolean algebra, one of the most foundational tools of symbolic AI, is named. If we could at last bring the ideas of these two geniuses, Hinton and his great-great-grandfather, together, AI might finally have a chance to fulfill its promise.

For at least four reasons, hybrid AI, not deep learning alone (nor symbols alone), seems the best way forward:

• So much of the world’s knowledge, from recipes to history to technology, is currently available mainly or only in symbolic form. Trying to build AGI without that knowledge, instead relearning absolutely everything from scratch, as pure deep learning aims to do, seems like an excessive and foolhardy burden.

•  Deep learning on its own continues to struggle even in domains as orderly as arithmetic.21 A hybrid system may have more power than either system on its own.

• Symbols still far outstrip current neural networks in many fundamental aspects of computation. They are much better positioned to reason their way through complex scenarios,22 can do basic operations like arithmetic more systematically and reliably, and are better able to precisely represent relationships between parts and wholes (essential both in the interpretation of the 3-D world and the comprehension of human language). They are more robust and flexible in their capacity to represent and query large-scale databases. Symbols are also more conducive to formal verification techniques, which are critical for some aspects of safety and ubiquitous in the design of modern microprocessors. To abandon these virtues rather than leveraging them into some sort of hybrid architecture would make little sense.

• Deep learning systems are black boxes; we can look at their inputs, and their outputs, but we have a lot of trouble peering inside. We don’t know exactly why they make the decisions they do, and often don’t know what to do about them (except to gather more data) if they come up with the wrong answers. This makes them inherently unwieldy and uninterpretable, and in many ways unsuited for “augmented cognition” in conjunction with humans. Hybrids that allow us to connect the learning prowess of deep learning, with the explicit, semantic richness of symbols, could be transformative.

Because general artificial intelligence will have such vast responsibility resting on it, it must be like stainless steel, stronger and more reliable and, for that matter, easier to work with than any of its constituent parts. No single AI approach will ever be enough on its own; we must master the art of putting diverse approaches together, if we are to have any hope at all. (Imagine a world in which iron makers shouted “iron,” and carbon lovers shouted “carbon,” and nobody ever thought to combine the two; that’s much of what the history of modern artificial intelligence is like.)

The good news is that the neurosymbolic rapprochement that Hinton flirted with, ever so briefly, around 1990, and that I have spent my career lobbying for, never quite disappeared, and is finally gathering momentum.

Artur Garcez and Luis Lamb wrote a manifesto for hybrid models in 2009, called Neural-Symbolic Cognitive Reasoning. And some of the best-known recent successes in board-game playing (Go, Chess, and so forth, led primarily by work at Alphabet’s DeepMind) are hybrids. AlphaGo used symbolic tree search, an idea from the late 1950s (and souped up with a much richer statistical basis in the 1990s), side by side with deep learning; classical tree search on its own wouldn’t suffice for Go, nor would deep learning alone. DeepMind’s AlphaFold2, a system for predicting the structure of proteins from their amino-acid sequences, is also a hybrid model, one that brings together some carefully constructed symbolic ways of representing the 3-D physical structure of molecules with the awesome data-trawling capacities of deep learning.
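To make "hybrid" concrete, here is a schematic sketch (emphatically not DeepMind's code) of the marriage at AlphaGo's heart: classical tree search over symbolic game states, with a neural network supplying move priors and position values. A uniform stub stands in for the trained network, and a trivial take-1-or-2 Nim game stands in for Go:

```python
import math

def legal_moves(n):            # game state: n stones left; take 1 or 2
    return [m for m in (1, 2) if m <= n]

def policy_value_stub(n):      # stand-in for a trained policy/value network
    moves = legal_moves(n)
    return {m: 1 / len(moves) for m in moves}, 0.0

def mcts_move(root, simulations=200, c_puct=1.0):
    N, Q, P = {}, {}, {}       # visit counts, mean values, network priors

    def search(n):
        if n == 0:             # previous player took the last stone and won,
            return -1.0        # so the player to move here has lost
        if n not in P:         # unexpanded leaf: consult the "network"
            P[n], value = policy_value_stub(n)
            N[n] = {m: 0 for m in P[n]}
            Q[n] = {m: 0.0 for m in P[n]}
            return value
        total = 1 + sum(N[n].values())   # symbolic part: PUCT selection rule
        m = max(P[n], key=lambda m:
                Q[n][m] + c_puct * P[n][m] * math.sqrt(total) / (1 + N[n][m]))
        value = -search(n - m)           # opponent's gain is our loss
        N[n][m] += 1
        Q[n][m] += (value - Q[n][m]) / N[n][m]
        return value

    for _ in range(simulations):
        search(root)
    return max(N[root], key=N[root].get)  # play the most-visited move

print(mcts_move(5))  # with perfect play, taking 2 (leaving 3) wins
```

The division of labor is the point: the search machinery is pure symbol manipulation, while the learned network (here stubbed) focuses that search where experience says good moves lie.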

Researchers like Josh Tenenbaum, Anima Anandkumar, and Yejin Choi are also now headed in increasingly neurosymbolic directions. Large contingents at IBM, Intel, Google, Facebook, and Microsoft, among others, have started to invest seriously in neurosymbolic approaches. Swarat Chaudhuri and his colleagues are developing a field called “neurosymbolic programming”23 that is music to my ears.

For the first time in 40 years, I finally feel some optimism about AI. As cognitive scientists Chaz Firestone and Brian Scholl eloquently put it: “There is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion.” Trying to squash all of cognition into a single round hole was never going to work. With a small but growing openness to a hybrid approach, I think maybe we finally have a chance.

With all the challenges in ethics and computation, and the knowledge needed from fields like linguistics, psychology, anthropology, and neuroscience, and not just mathematics and computer science, it will take a village to raise an AI. We should never forget that the human brain is perhaps the most complicated system in the known universe; if we are to build something roughly its equal, open-hearted collaboration will be key.

Gary Marcus is a scientist, best-selling author, and entrepreneur. He was the founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016, and is Founder and Executive Chairman of Robust AI. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and New York Times bestseller Guitar Zero, and his most recent, co-authored with Ernest Davis, Rebooting AI, one of Forbes’ 7 Must-Read Books in Artificial Intelligence.

Lead art: bookzv / Shutterstock

References

1. Varoquaux, G. & Cheplygina, V. How I failed machine learning in medical imaging—shortcomings and recommendations. arXiv 2103.10292 (2021).

2. Chan, S., & Siegel, E.L. Will machine learning end the viability of radiology as a thriving medical specialty? British Journal of Radiology 92, 20180416 (2018).

3. Ross, C. Once billed as a revolution in medicine, IBM’s Watson Health is sold off in parts. STAT News (2022).

4. Hao, K. AI pioneer Geoff Hinton: “Deep learning is going to be able to do everything.” MIT Technology Review (2020).

5. Aguera y Arcas, B. Do large language models understand us? Medium (2021).

6. Davis, E. & Marcus, G. GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about. MIT Technology Review (2020).

7. Greene, T. DeepMind tells Google it has no idea how to make AI less toxic. The Next Web (2021).

8. Weidinger, L., et al. Ethical and social risks of harm from Language Models. arXiv 2112.04359 (2021).

9. Bender, E.M., Gebru, T., McMillan-Major, A., & Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency 610–623 (2021).

10. Kaplan, J., et al. Scaling Laws for Neural Language Models. arXiv 2001.08361 (2020).

11. Markoff, J. Smaller, Faster, Cheaper, Over: The Future of Computer Chips. The New York Times (2015).

12. Rae, J.W., et al. Scaling language models: Methods, analysis & insights from training Gopher. arXiv 2112.11446 (2021).

13. Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).

14. Wiggers, K. Facebook releases AI development tool based on NetHack. Venturebeat.com (2020).

15. Brownlee, J. Hands on big data by Peter Norvig. machinelearningmastery.com (2014).

16. McCulloch, W.S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biology 52, 99-115 (1990).

17. Olazaran, M. A sociological history of the neural network controversy. Advances in Computers 37, 335-425 (1993).

18. Marcus, G.F., et al. Overregularization in language acquisition. Monographs of the Society for Research in Child Development 57 (1992).

19. Hinton, G. Aetherial Symbols. AAAI Spring Symposium on Knowledge Representation and Reasoning Stanford University, CA (2015).

20. LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).

21. Razeghi, Y., Logan IV, R.L., Gardner, M., & Singh, S. Impact of pretraining term frequencies on few-shot reasoning. arXiv 2202.07206 (2022).

22. Lenat, D. What AI can learn from Romeo & Juliet. Forbes (2019).

23. Chaudhuri, S., et al. Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158-243 (2021).

Thus Spoke Zarathustra


Thus Spoke Zarathustra: A Book for All and None
Title page of the first three-book edition

Author: Friedrich Nietzsche
Original title: Also sprach Zarathustra: Ein Buch für Alle und Keinen
Country: Germany
Language: German
Publisher: Ernst Schmeitzner
Publication date: 1883–1892
Media type: Print (hardcover and paperback)
Preceded by: The Gay Science
Followed by: Beyond Good and Evil

Thus Spoke Zarathustra: A Book for All and None (German: Also sprach Zarathustra: Ein Buch für Alle und Keinen), also translated as Thus Spake Zarathustra, is a work of philosophical fiction written by German philosopher Friedrich Nietzsche between 1883 and 1885. The protagonist is nominally the historical Zarathustra, but, besides a handful of sentences, Nietzsche is not particularly concerned with any resemblance. Much of the book purports to be what Zarathustra said, and it repeats the refrain, "Thus spoke Zarathustra".

The style of Zarathustra has facilitated variegated and often incompatible ideas about what Zarathustra says. Zarathustra's "[e]xplanations and claims are almost always analogical and figurative". Though there is no consensus about what Zarathustra means when he speaks, there is some consensus about what he speaks about. Zarathustra deals with ideas about the Übermensch, the death of God, the will to power, and eternal recurrence.

Zarathustra himself first appeared in Nietzsche's earlier book The Gay Science. Nietzsche has suggested that his Zarathustra is a tragedy and a parody and a polemic and the culmination of the German language. It was his favourite of his own books. He was aware, however, that readers might not understand it. Possibly this is why he subtitled it A Book for All and None. But as with the work as a whole, the subtitle has baffled many critics, and there is no consensus.

Zarathustra's themes and merits are continually disputed. It has nonetheless been hugely influential in various facets of culture.

Origins

Nietzsche was born into, and largely remained within, the Bildungsbürgertum, a sort of highly cultivated middle class. By the time he was a teenager, he had been writing music and poetry. His aunt Rosalie gave him a biography of Alexander von Humboldt for his 15th birthday, and reading this inspired a love of learning "for its own sake". The schools he attended, the books he read, and his general milieu fostered and inculcated his interests in Bildung, or self-development, a concept at least tangential to many ideas in Zarathustra, and he worked extremely hard. He became an outstanding philologist almost accidentally, and he renounced his ideas about being an artist. As a philologist he became particularly sensitive to the transmissions and modifications of ideas, which also bears on Zarathustra. Nietzsche's growing distaste for philology, however, was yoked with his growing taste for philosophy. As a student, this yoke was his work with Diogenes Laertius. Even with that work he strongly opposed received opinion, and he continued to oppose received opinion in his subsequent, properly philosophical work. His books leading up to Zarathustra have been described as nihilistic destruction. Such nihilistic destruction, combined with his increasing isolation and the rejection of his marriage proposals (to Lou Andreas-Salomé), devastated him. While he was working on Zarathustra he was walking a great deal. The imagery of his walks mingled with his physical and emotional and intellectual pains and his prior decades of hard work. What "erupted" was Thus Spoke Zarathustra.

Nietzsche wrote in Ecce Homo that the central idea of Zarathustra occurred to him by a "pyramidal block of stone" on the shores of Lake Silvaplana.
 
Mountains around Nietzsche Path, Èze, France.

Nietzsche has said that the central idea of Zarathustra is the eternal recurrence. He has also said that this central idea first occurred to him in August 1881: he was near a "pyramidal block of stone" while walking through the woods along the shores of Lake Silvaplana in the Upper Engadine, and he made a small note that read "6,000 feet beyond man and time."

Nietzsche's first note on the "eternal recurrence", written "at the beginning of August 1881 in Sils-Maria, 6000 ft above sea level and much higher above all human regards! -" Nachlass, notebook M III 1, p. 53.

A few weeks after encountering this idea, he paraphrased in a notebook something written by Friedrich von Hellwald about Zarathustra. This paraphrase was developed into the beginning of Thus Spoke Zarathustra.

A year and a half after making that paraphrase, Nietzsche was living in Rapallo. Nietzsche claimed that the entire first part was conceived, and that Zarathustra himself "came over him", while walking. He was regularly walking "the magnificent road to Zoagli" and "the whole Bay of Santa Margherita". He said in a letter that the entire first part "was conceived in the course of strenuous hiking: absolute certainty, as if every sentence were being called out to me".

Nietzsche returned to "the sacred place" in the summer of 1883 and he "found" the second part.

Nietzsche was in Nice the following winter and he "found" the third part.

According to Nietzsche in Ecce Homo it was "scarcely one year for the entire work", and ten days for each part. More broadly, however, he said in a letter: "The whole of Zarathustra is an explosion of forces that have been accumulating for decades".

In January 1884 Nietzsche had finished the third part and thought the book finished. But by November he expected a fourth part to be finished by January. He also mentioned a fifth and sixth part leading to Zarathustra's death, "or else he will give me no peace". But after the fourth part was finished he called it "a fourth (and last) part of Zarathustra, a kind of sublime finale, which is not at all meant for the public".

The first three parts were initially published individually and were first published together in a single volume in 1887. The fourth part was written in 1885 and kept private. While Nietzsche retained mental capacity and was involved in the publication of his works, forty-five copies of the fourth part were printed at his own expense and distributed to his closest friends, to whom he expressed "a vehement desire never to have the Fourth Part made public". In 1889, however, Nietzsche became significantly incapacitated. In March 1892 the four parts were published in a single volume.

Themes

Friedrich Nietzsche, Edvard Munch, 1906.

Scholars have argued that "the worst possible way to understand Zarathustra is as a teacher of doctrines". Nonetheless Thus Spoke Zarathustra "has contributed most to the public perception of Nietzsche as philosopher – namely, as the teacher of the 'doctrines' of the will to power, the overman and the eternal return".

Will to power

Now hear my word, you who are wisest! Test in earnest whether I have crept into the very heart of Life, and into the very roots of her heart!

Nietzsche, translated by Parkes, On Self-Overcoming

Nietzsche's thinking was significantly influenced by the thinking of Arthur Schopenhauer. Schopenhauer emphasised will, and particularly will to live. Nietzsche emphasised Wille zur Macht, or will to power. Will to power has been one of the more problematic of Nietzsche's ideas.

Nietzsche was not a systematic philosopher and left much of what he wrote open to interpretation. Receptive fascists are said to have misinterpreted the will to power, having overlooked Nietzsche's distinction between Kraft ("force" or "strength") and Macht ("power" or "might").

Scholars have often had recourse to Nietzsche's notebooks, where will to power is described in ways such as "willing-to-become-stronger [Stärker-werden-wollen], willing growth".

Übermensch

You have made your way from worm to human, and much in you is still worm.

Nietzsche, translated by Parkes, Zarathustra's Prologue

It is allegedly "well-known that as a term, Nietzsche’s Übermensch derives from Lucian of Samosata's hyperanthropos". This hyperanthropos, or overhuman, appears in Lucian's Menippean satire Κατάπλους ἢ Τύραννος, usually translated Downward Journey or The Tyrant. This hyperanthropos is "imagined to be superior to others of 'lesser' station in this-worldly life and the same tyrant after his (comically unwilling) transport into the underworld". Nietzsche celebrated Goethe as an actualisation of the Übermensch.

Eternal recurrence

Nietzsche in the care of his sister in 1899. Hans Olde produced this image as part of a series, Der kranke Nietzsche, or the sick Nietzsche. Some critics of Nietzsche have linked the eternal recurrence to encroaching madness.

Thus I was talking, and ever more softly: for I was afraid of my own thoughts and the motives behind them.

Nietzsche, translated by Parkes, On the Vision and the Riddle

Nietzsche included some brief writings on eternal recurrence in his earlier book The Gay Science. Zarathustra also appeared in that book. In Thus Spoke Zarathustra, the eternal recurrence is, according to Nietzsche, the fundamental idea.

Interpretations of the eternal recurrence have mostly revolved around cosmological and attitudinal and normative principles.

As a cosmological principle, it has been supposed to mean that time is circular, that all things recur eternally. A weak attempt at proof has been noted in Nietzsche's notebooks, and it is not clear to what extent, if at all, Nietzsche believed in the truth of it. Critics have mostly dealt with the cosmological principle as a puzzle of why Nietzsche might have touted the idea.

As an attitudinal principle it has often been dealt with as a thought experiment, to see how one would react, or as a sort of ultimate expression of life-affirmation, as if one should desire eternal recurrence.

As a normative principle, it has been thought of as a measure or standard, akin to a "moral rule".

Criticism of Religion(s)

Ah, brothers, this God that I created was humans'-work and -madness, just like all Gods!

Nietzsche, translated by Parkes, On Believers in a World Behind

Nietzsche studied extensively and was very familiar with Schopenhauer and Christianity and Buddhism, each of which he considered nihilistic and "enemies to a healthy culture". Thus Spoke Zarathustra can be understood as a "polemic" against these influences.

Though Nietzsche "probably learned Sanskrit while at Leipzig from 1865 to 1868", and "was probably one of the best read and most solidly grounded in Buddhism for his time among Europeans", Nietzsche was writing when Eastern thought was only beginning to be acknowledged in the West, and Eastern thought was easily misconstrued. Nietzsche's interpretations of Buddhism were coloured by his study of Schopenhauer, and it is "clear that Nietzsche, as well as Schopenhauer, entertained inaccurate views of Buddhism". An egregious example has been the idea of śūnyatā as "nothingness" rather than "emptiness". "Perhaps the most serious misreading we find in Nietzsche's account of Buddhism was his inability to recognize that the Buddhist doctrine of emptiness was an initiatory stage leading to a reawakening". Nietzsche dismissed Schopenhauer and Christianity and Buddhism as pessimistic and nihilistic, but, according to Benjamin A. Elman, "[w]hen understood on its own terms, Buddhism cannot be dismissed as pessimistic or nihilistic". Moreover, answers which Nietzsche assembled to the questions he was asking, not only generally but also in Zarathustra, put him "very close to some basic doctrines found in Buddhism". An example is when Zarathustra says that "the soul is only a word for something about the body".

Nihilism

Nietzsche, September 1882. Shortly after this picture was taken Nietzsche's corrosive nihilism and devastating circumstances would reach a critical point from which Zarathustra would erupt.

'Verily,' he said to his disciples, 'just a little while and this long twilight will be upon us'.

Nietzsche, translated by Parkes, The Soothsayer

It has been often repeated in some way that Nietzsche takes with one hand what he gives with the other. Accordingly, interpreting what he wrote has been notoriously slippery. One of the most vexed points in discussions of Nietzsche has been whether or not he was a nihilist. Though arguments have been made for either side, what is clear is that Nietzsche was at least interested in nihilism.

As far as nihilism touched other people, at least, metaphysical understandings of the world were progressively undermined until people could contend that "God is dead". Without God, humanity was greatly devalued. Without metaphysical or supernatural lenses, humans could be seen as animals with primitive drives which were or could be sublimated. According to Hollingdale, this led to Nietzsche's ideas about the will to power. Likewise, "Sublimated will to power was now the Ariadne's thread tracing the way out of the labyrinth of nihilism".

Style

"On Reading and Writing.
Of all that is written, I love only that which one writes with one's own blood."
Thus Spoke Zarathustra, The Complete Works of Friedrich Nietzsche, Volume VI, 1899, C. G. Naumann, Leipzig.

My style is a dance.

Nietzsche, letter to Erwin Rohde.

The nature of the text is musical and operatic. While working on it Nietzsche wrote "of his aim 'to become Wagner's heir'". Nietzsche thought of it as akin to a symphony or opera. "No lesser a symphonist than Gustav Mahler corroborates: 'His Zarathustra was born completely from the spirit of music, and is even "symphonically constructed"'". Nietzsche later draws special attention to "the tempo of Zarathustra's speeches" and their "delicate slowness" – "from an infinite fullness of light and depth of happiness drop falls after drop, word after word" – as well as the necessity of "hearing properly the tone that issues from his mouth, this halcyon tone".

The length of paragraphs and the punctuation and the repetitions all enhance the musicality.

The title is Thus Spoke Zarathustra, and much of the book is what Zarathustra said. What Zarathustra says is throughout so highly parabolic, metaphorical, and aphoristic. Rather than state various claims about virtues and the present age and religion and aspirations, Zarathustra speaks about stars, animals, trees, tarantulas, dreams, and so forth. Explanations and claims are almost always analogical and figurative.

Nietzsche would often appropriate masks and models to develop himself and his thoughts and ideas, and to find voices and names through which to communicate. While writing Zarathustra, Nietzsche was particularly influenced by "the language of Luther and the poetic form of the Bible". But Zarathustra also frequently alludes to or appropriates from Hölderlin's Hyperion and Goethe's Faust and Emerson's Essays, among other things. It is generally agreed that the sorcerer is based on Wagner and the soothsayer is based on Schopenhauer.

The original text contains a great deal of word-play. For instance, words beginning with über ('over, above') and unter ('down, below') are often paired to emphasise the contrast, which is not always possible to bring out in translation, except by coinages. An example is Untergang (lit. 'down-going'), which is used in German to mean 'setting' (as in, of the sun), but also 'sinking', 'demise', 'downfall', or 'doom'. Nietzsche pairs this word with its opposite Übergang ('over-going'), used to mean 'transition'. Another example is Übermensch ('overman' or 'superman').

Reception and influence

Critical

Nietzsche wrote in a letter of February 1884:

With Zarathustra I believe I have brought the German language to its culmination. After Luther and Goethe there was still a third step to be made.

To this, Parkes has said: "Many scholars believe that Nietzsche managed to make that step". But critical opinion varies widely: the book has been called "a masterpiece of literature as well as philosophy" and "in large part a failure".

Nietzsche

Nietzsche has said that "among my writings my Zarathustra stands to my mind by itself." Emphasizing its centrality and its status as his magnum opus, Nietzsche has stated that:

With [Thus Spoke Zarathustra] I have given mankind the greatest present that has ever been made to it so far. This book, with a voice bridging centuries, is not only the highest book there is, the book that is truly characterized by the air of the heights—the whole fact of man lies beneath it at a tremendous distance—it is also the deepest, born out of the innermost wealth of truth, an inexhaustible well to which no pail descends without coming up again filled with gold and goodness.

— Ecce Homo, "Preface" §4, translated by W. Kaufmann

Not Nietzsche

The style of the book, along with its ambiguity and paradoxical nature, has helped its eventual enthusiastic reception by the reading public, but has frustrated academic attempts at analysis (as Nietzsche may have intended). Thus Spoke Zarathustra remained unpopular as a topic for scholars (especially those in the Anglo-American analytic tradition) until the latter half of the 20th century brought widespread interest in Nietzsche and his unconventional style.

The critic Harold Bloom criticized Thus Spoke Zarathustra in The Western Canon (1994), calling the book "a gorgeous disaster" and "unreadable." Other commentators have suggested that Nietzsche's style is intentionally ironic for much of the book.

Memorial

Text from Thus Spoke Zarathustra constitutes the Nietzsche memorial stone that was erected at Lake Sils in 1900, the year Nietzsche died.

Nietzsche memorial stone, Lake Sils.

Musical

19th century

  • Richard Strauss composed the tone poem Also sprach Zarathustra (1896), inspired by the book.

20th century

  • Frederick Delius based his major choral-orchestral work A Mass of Life (1904–5) on texts from Thus Spoke Zarathustra. The work ends with a setting of "Zarathustra's Roundelay" which Delius had composed earlier, in 1898, as a separate work.
  • Carl Orff composed a three-movement setting of part of Nietzsche's text as a teenager, but this has remained unpublished.
  • The short score of the third symphony by Arnold Bax originally began with a quotation from Thus Spoke Zarathustra: "My wisdom became pregnant on lonely mountains; upon barren stones she brought forth her young."
  • Another setting of the roundelay is one of the songs of Lukas Foss's Time Cycle for soprano and orchestra.
  • Italian progressive rock band Museo Rosenbach released the album Zarathustra, with lyrics referring to the book.

21st century

  • The Thomas Common English translation of part 2 chapter 7, Tarantulas, has been narrated by Jordan Peterson and musically toned by artist Akira the Don.

Political

Elisabeth Förster-Nietzsche (Nietzsche's sister) in 1910. Förster-Nietzsche controlled and influenced the reception of Nietzsche's work.

In 1893, Elisabeth Förster-Nietzsche returned to Germany from administrating a failed colony in Paraguay and took charge of Nietzsche's manuscripts. Nietzsche was by this point incapacitated. Förster-Nietzsche edited the manuscripts and invented false biographical information and fostered affiliations with the Nazis. The Nazis issued special editions of Zarathustra to soldiers.

Visual/Film

"Thus Spoke Zarathustra" by Nietzsche, Parts I - III of the Kaufmann Translation, (1993) 97 minute Film with Subtitles by Ronald Gerard Smith. Distributed by Films for the Humanities and Sciences (2012 - 2019).

English translations

The first English translation of Zarathustra was published in 1896 by Alexander Tille.

Common (1909)

Thomas Common published a translation in 1909 which was based on Alexander Tille's earlier attempt. Common wrote in the style of Shakespeare or the King James Version of the Bible. Common's poetic interpretation of the text, which renders the title Thus Spake Zarathustra, received wide acclaim for its lambent portrayal. Common reasoned that because the original German was written in a pseudo-Luther-Biblical style, a pseudo-King-James-Biblical style would be fitting in the English translation.

Kaufmann's introduction to his own translation included a blistering critique of Common's version; he notes that in one instance, Common has taken the German "most evil" and rendered it "baddest", a particularly unfortunate error not merely for his having coined the term "baddest", but also because Nietzsche dedicated a third of The Genealogy of Morals to the difference between "bad" and "evil." This and other errors led Kaufmann to wonder whether Common "had little German and less English."

The German text available to Common was considerably flawed.

From Zarathustra's Prologue:

The Superman is the meaning of the earth. Let your will say: The Superman shall be the meaning of the earth!
I conjure you, my brethren, remain true to the earth, and believe not those who speak unto you of superearthly hopes! Poisoners are they, whether they know it or not.

Kaufmann (1954) and Hollingdale (1961)

The Common translation remained widely accepted until more critical translations, titled Thus Spoke Zarathustra, were published by Walter Kaufmann in 1954, and R.J. Hollingdale in 1961, which are considered to convey the German text more accurately than the Common version. The translations of Kaufmann and Hollingdale render the text in a far more familiar, less archaic, style of language, than that of Common. However, "deficiencies" have been noted.

The German text from which Hollingdale and Kaufmann worked was untrue to Nietzsche's own work in some ways. Clancy Martin criticizes Kaufmann for changing punctuation, altering literal and philosophical meanings, and dampening some of Nietzsche's more controversial metaphors. Kaufmann's version, which has become the most widely available, features a translator's note suggesting that Nietzsche's text would have benefited from an editor; Martin suggests that Kaufmann "took it upon himself to become [Nietzsche's] editor."

Kaufmann, from Zarathustra's Prologue:

The overman is the meaning of the earth. Let your will say: the overman shall be the meaning of the earth! I beseech you, my brothers, remain faithful to the earth, and do not believe those who speak to you of otherworldly hopes! Poison-mixers are they, whether they know it or not.

Hollingdale, from Zarathustra's Prologue:

The Superman is the meaning of the earth. Let your will say: the Superman shall be the meaning of the earth!
I entreat you, my brothers, remain true to the earth, and do not believe those who speak to you of superterrestrial hopes! They are poisoners, whether they know it or not.

Wayne (2003)

Thomas Wayne, a professor of English at Edison State College in Fort Myers, Florida, published a translation in 2003. The introduction by Roger W. Phillips, Ph.D., states that "Wayne's close reading of the original text has exposed the deficiencies of earlier translations, preeminent among them that of the highly esteemed Walter Kaufmann", and gives several reasons.

Parkes (2005) and Del Caro (2006)

Graham Parkes describes his own 2005 translation as trying "above all to convey the musicality of the text." In 2006, Cambridge University Press published a translation by Adrian Del Caro, edited by Robert Pippin.

Parkes, from Zarathustra's Prologue:

The Overhuman is the sense of the earth. May your will say: Let the Overhuman be the sense of the earth!
I beseech you, my brothers, stay true to the earth and do not believe those who talk of over-earthly hopes! They are poison-mixers, whether they know it or not.

Del Caro, from Zarathustra's Prologue:

The overman is the meaning of the earth. Let your will say: the overman shall be the meaning of the earth!
I beseech you, my brothers, remain faithful to the earth and do not believe those who speak to you of extraterrestrial hopes! They are mixers of poisons whether they know it or not.

Archery


[Image gallery: archery competition in June 1983 at Mönchengladbach, West Germany; a Rikbaktsa archer competing at Brazil's Indigenous Games; a Tibetan archer, 1938; Master Heon Kim demonstrating Gungdo, traditional Korean archery (Kuk Kung), 2009; archers in East Timor; a Japanese archer; archery in Bhutan]

Archery is the sport, practice, or skill of using a bow to shoot arrows. The word comes from the Latin arcus, meaning bow. Historically, archery has been used for hunting and combat. In modern times, it is mainly a competitive sport and recreational activity. A person who practices archery is typically called an archer or a bowman, and a person who is fond of or an expert at archery is sometimes called a toxophilite or a marksman.

History

The oldest known evidence of arrows comes from South African sites such as Sibudu Cave, where remains of bone and stone arrowheads have been found dating from approximately 72,000 to 60,000 years ago. Based on indirect evidence, the bow also seems to have appeared or reappeared later in Eurasia, near the transition from the Upper Paleolithic to the Mesolithic. The earliest definite remains of bow and arrow from Europe are possible fragments from Germany, found at Mannheim-Vogelstang and dated to 17,500–18,000 years ago, and at Stellmoor, dated to 11,000 years ago. Azilian points found in Grotte du Bichon, Switzerland, alongside the remains of both a bear and a hunter, with flint fragments embedded in the bear's third vertebra, suggest the use of arrows 13,500 years ago. Other signs of early use in Europe come from Stellmoor in the Ahrensburg valley north of Hamburg, Germany, and date from the late Paleolithic, about 10,000–9000 BC. The arrows were made of pine and consisted of a main shaft and a 15–20-centimetre-long (5 7⁄8 – 7 7⁄8 in) foreshaft with a flint point. There are no definite earlier bows; previous pointed shafts are known, but they may have been launched by spear-throwers rather than bows. The oldest bows known so far come from the Holmegård swamp in Denmark. At the site of Nataruk in Turkana County, Kenya, obsidian bladelets found embedded in a skull and within the thoracic cavity of another skeleton suggest the use of stone-tipped arrows as weapons about 10,000 years ago. Bows eventually replaced the spear-thrower as the predominant means of launching shafted projectiles on every continent except Australasia, though spear-throwers persisted alongside the bow in parts of the Americas, notably Mexico, and among the Inuit.

Bows and arrows have been present in Egyptian and neighboring Nubian culture since their respective predynastic and Pre-Kerma origins. In the Levant, artifacts that could be arrow-shaft straighteners are known from the Natufian culture (c. 10,800–8,300 BC) onwards. The Khiamian and PPN A shouldered Khiam-points may well be arrowheads.

Classical civilizations, notably the Assyrians, Greeks, Armenians, Persians, Parthians, Romans, Indians, Koreans, Chinese, and Japanese, fielded large numbers of archers in their armies. According to the victory stele of Naram-Sin of Akkad, the Akkadians were the first to use composite bows in war. Egyptians referred to Nubia as "Ta-Seti", or "The Land of the Bow", since the Nubians were known to be expert archers, and by the 16th century BC the Egyptians were using the composite bow in warfare. The Bronze Age Aegean cultures deployed a number of state-owned specialized bow makers for warfare and hunting purposes from as early as the 15th century BC. The Welsh longbow proved its worth for the first time in continental warfare at the Battle of Crécy. In the Americas, archery was widespread at European contact.

Archery was highly developed in Asia. The Sanskrit term for archery, dhanurveda, came to refer to martial arts in general. In East Asia, Goguryeo, one of the Three Kingdoms of Korea was well known for its regiments of exceptionally skilled archers.

Mounted archery

Hunting flying birds from the back of a galloping horse was considered the top category of archery; a favourite hobby of Prince Maximilian, engraved by Dürer.

Tribesmen of Central Asia (after the domestication of the horse) and American Plains Indians (after gaining access to horses brought by Europeans) became extremely adept at archery on horseback. Lightly armoured but highly mobile archers were excellently suited to warfare in the Central Asian steppes, and they formed a large part of armies that repeatedly conquered large areas of Eurasia. Shorter bows are more suited to use on horseback, and the composite bow enabled mounted archers to use powerful weapons. Empires throughout the Eurasian landmass often strongly associated their respective "barbarian" counterparts with the use of the bow and arrow, to the point where powerful states like the Han dynasty referred to their neighbours, the Xiong-nu, as "Those Who Draw the Bow". The mounted bowmen of the Xiong-nu, for example, made them more than a match for the Han military, and their threat was at least partially responsible for Chinese expansion into the Ordos region, to create a stronger, more powerful buffer zone against them. It is possible that "barbarian" peoples were responsible for introducing archery or certain types of bows to their "civilized" counterparts; the Xiong-nu and the Han are one example. Similarly, short bows seem to have been introduced to Japan by northeast Asian groups.

Decline of archery

The development of firearms rendered bows obsolete in warfare, although efforts were sometimes made to preserve archery practice. In England and Wales, for example, the government tried to enforce practice with the longbow until the end of the 16th century, because it was recognized that the bow had been instrumental to military success during the Hundred Years' War. Despite the high social status, ongoing utility, and widespread pleasure of archery in Armenia, China, Egypt, England and Wales, the Americas, India, Japan, Korea, Turkey, and elsewhere, almost every culture that gained access to even early firearms used them widely, to the neglect of archery. Early firearms were inferior in rate of fire and were very sensitive to wet weather. However, they had longer effective range and were tactically superior in the common situation of soldiers shooting at each other from behind obstructions. They also required significantly less training to use properly; in particular, they could penetrate steel armor without any need for the user to develop special musculature. Armies equipped with guns could thus provide superior firepower, and highly trained archers became obsolete on the battlefield. However, the bow and arrow is still an effective weapon, and archers have seen military action in the 21st century. Traditional archery remains in use for sport, and for hunting in many areas.

Late 18th-century revival

A print of the 1822 meeting of the "Royal British Bowmen" archery club.

Early recreational archery societies included the Finsbury Archers and the Ancient Society of Kilwinning Archers. The latter's annual Papingo event was first recorded in 1483. (In this event, archers shoot vertically from the base of an abbey tower to dislodge a wood pigeon placed approximately 30 m or 33 yards above.) The Royal Company of Archers was formed in 1676 and is one of the oldest sporting bodies in the world. Archery remained a small and scattered pastime, however, until the late 18th century when it experienced a fashionable revival among the aristocracy. Sir Ashton Lever, an antiquarian and collector, formed the Toxophilite Society in London in 1781, with the patronage of George, the Prince of Wales.

Archery societies were set up across the country, each with its own strict entry criteria and outlandish costumes. Recreational archery soon became the occasion for extravagant social and ceremonial events for the nobility, complete with flags, music, and 21-gun salutes for the competitors. The clubs were "the drawing rooms of the great country houses placed outside" and thus came to play an important role in the social networks of the local upper class. As well as its emphasis on display and status, the sport was notable for its popularity with women. Young women could not only compete in the contests but also retain and show off their sexuality while doing so. Thus, archery came to act as a forum for introductions, flirtation, and romance. It was often consciously styled in the manner of a medieval tournament, with titles and laurel wreaths presented as a reward to the victor. General meetings were held from 1789, in which local lodges convened to standardise the rules and ceremonies. Archery was also co-opted as a distinctively British tradition, dating back to the lore of Robin Hood, and it served as a patriotic form of entertainment at a time of political tension in Europe. The societies were also elitist, and the new middle-class bourgeoisie were excluded from the clubs due to their lack of social status.

After the Napoleonic Wars, the sport became increasingly popular among all classes, and it was framed as a nostalgic reimagining of preindustrial rural Britain. Particularly influential was Sir Walter Scott's 1819 novel Ivanhoe, which depicted the heroic character Locksley winning an archery tournament.

An archer in the coat of arms of Lieksa, based on the 1669 seal of the old town of Brahea.

A modern sport

The 1840s saw the first attempts at turning the recreation into a modern sport. The first Grand National Archery Society meeting was held in York in 1844, and over the next decade the extravagant and festive practices of the past were gradually whittled away and the rules were standardized as the "York Round", a series of shoots at 60 yards (55 m), 80 yards (73 m), and 100 yards (91 m). Horace A. Ford helped to improve archery standards and pioneered new archery techniques. He won the Grand National 11 times in a row and published a highly influential guide to the sport in 1856.

Picture of Saxton Pope taken while grizzly hunting at Yellowstone

Towards the end of the 19th century, the sport experienced declining participation as alternative sports such as croquet and tennis became more popular among the middle class. By 1889, just 50 archery clubs were left in Britain, but it was still included as a sport at the 1900 Paris Olympics.

The National Archery Association of the United States was organized in 1879, in part by Maurice Thompson (author of the seminal text "The Witchery of Archery") and his brother Will Thompson. Maurice was president in its inaugural year, and Will was president in 1882, 1903, and 1904. The 1910 president was Frank E. Canfield. Today it is known as USA Archery and is recognized by the United States Olympic & Paralympic Committee.

In the United States, primitive archery was revived in the early 20th century. The last of the Yahi Indian tribe, a man known as Ishi, came out of hiding in California in 1911. His doctor, Saxton Pope, learned many of Ishi's traditional archery skills and popularized them. The Pope and Young Club, founded in 1961 and named in honor of Pope and his friend Arthur Young, became one of North America's leading bowhunting and conservation organizations. Founded as a nonprofit scientific organization, the club was patterned after the prestigious Boone and Crockett Club and advocated responsible bowhunting by promoting quality, fair-chase hunting, and sound conservation practices.

Five women taking part in an archery contest in 1931

From the 1920s, professional engineers took an interest in archery, previously the exclusive field of traditional craft experts. They led the commercial development of new forms of bow, including the modern recurve and compound bow. These forms are now dominant in Western archery; traditional bows are in a minority. Archery returned to the Olympics in 1972. In the 1980s, the skills of traditional archery were revived by American enthusiasts and combined with the new scientific understanding. Much of this expertise is available in the Traditional Bowyer's Bibles (see Further reading). Modern game archery owes much of its success to Fred Bear, an American bow hunter and bow manufacturer.

In 2021, five people were killed and three injured by an archer in Norway in the Kongsberg attack.

Mythology

Vishwamitra archery training from Ramayana

Deities and heroes in several mythologies are described as archers, including the Greek Artemis and Apollo, the Roman Diana and Cupid, and the Germanic Agilaz, continuing in legends like those of Wilhelm Tell, Palnetoke, or Robin Hood. The Armenian Hayk, the Babylonian Marduk, and the Indian Karna (also known as Radheya, son of Radha), Abhimanyu, Eklavya, Arjuna, Bhishma, Drona, Rama, and Shiva were known for their shooting skills. The famous archery competition of hitting the eye of a rotating fish while watching its reflection in a water bowl is one of the many archery feats depicted in the Mahabharata. The Persian Arash was a famous archer. Earlier Greek representations of Heracles normally depict him as an archer. Archery, and the bow, play an important part in the epic poem the Odyssey, when Odysseus returns home in disguise and bests the suitors in an archery competition after hinting at his identity by stringing and drawing his great bow, which only he can draw; a similar motif is present in the Turkic heroic poem Alpamysh.

The Nymphai Hyperboreioi (Νύμφαι Ὑπερβόρειοι) were worshipped on the Greek island of Delos as attendants of Artemis, presiding over aspects of archery; Hekaerge (Ἑκαέργη), represented distancing, Loxo (Λοξώ), trajectory, and Oupis (Οὖπις), aim.

Yi the archer and his apprentice Feng Meng appear in several early Chinese myths, and the historical character of Zhou Tong features in many fictional forms. Jumong, the first Taewang of the Goguryeo kingdom of the Three Kingdoms of Korea, is claimed by legend to have been a near-godlike archer. Archery features in the story of Oguz Khagan. Similarly, archery and the bow feature heavily in historical Korean identity.

In West African Yoruba belief, Osoosi is one of several deities of the hunt who are identified with bow and arrow iconography and other insignia associated with archery.

Equipment

Types of bows

A Pacific yew selfbow drawn by the split finger method. Selfbows are made from a single piece of wood.

While there is great variety in the construction details of bows (both historic and modern), all bows consist of a string attached to elastic limbs that store mechanical energy imparted by the user drawing the string. Bows may be broadly split into two categories: those drawn by pulling the string directly and those that use a mechanism to pull the string.

Directly drawn bows may be further divided based upon differences in the method of limb construction, notable examples being self bows, laminated bows, and composite bows. Bows can also be classified by the shape of the limbs when unstrung; in contrast to traditional European straight bows, a recurve bow and some types of longbow have tips that curve away from the archer when the bow is unstrung. The cross-section of the limb also varies: the classic longbow is a tall bow with narrow limbs that are D-shaped in cross-section, while the flatbow has wide, flat limbs that are approximately rectangular in cross-section. Cable-backed bows use cords as the back of the bow; the draw weight of the bow can be adjusted by changing the tension of the cable. They were widespread among the Inuit, who lacked easy access to good bow wood. One variety of cable-backed bow is the Penobscot bow or Wabenaki bow, invented by Frank Loring (Chief Big Thunder) about 1900. It consists of a small bow attached by cables on the back of a larger main bow.

In different cultures, the arrows are released from either the left or right side of the bow, and this affects the hand grip and position of the bow. In Arab archery, Turkish archery and Kyūdō, the arrows are released from the right hand side of the bow, and this affects construction of the bow. In western archery, the arrow is usually released from the left hand side of the bow for a right-handed archer.

Modern (takedown) recurve bow

Compound bows are designed to reduce the force required to hold the string at full draw, hence allowing the archer more time to aim with less muscular stress. Most compound designs use cams or elliptical wheels on the ends of the limbs to achieve this. A typical let-off is anywhere from 65% to 80%. For example, a 60-pound (27 kg) bow with 80% let-off only requires 12 pounds-force (5.4 kgf; 53 N) to hold at full draw. Up to 99% let-off is possible. The compound bow was invented by Holless Wilbur Allen in the 1960s (a US patent was filed in 1966 and granted in 1969) and it has become the most widely used type of bow for all forms of archery in North America.
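
The let-off arithmetic is simple enough to verify directly. The following minimal Python sketch assumes only the definition of let-off (holding force = peak draw weight × (1 − let-off)); the numbers reproduce the 60-pound example above and are illustrative, not a manufacturer's specification.

    # Holding force at full draw for a compound bow, from the definition of let-off.
    # Assumption: holding_force = peak_draw_weight * (1 - let_off).
    def holding_force_lbf(peak_draw_weight_lbf: float, let_off: float) -> float:
        return peak_draw_weight_lbf * (1.0 - let_off)

    print(round(holding_force_lbf(60, 0.80), 1))  # -> 12.0 lbf, the example in the text
    print(round(holding_force_lbf(60, 0.99), 1))  # -> 0.6 lbf at the 99% extreme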

Mechanically drawn bows typically have a stock or other mounting, such as the crossbow. Crossbows typically have shorter draw lengths compared to compound bows. Because of this, heavier draw weights are required to achieve the same energy transfer to the arrow. These mechanically drawn bows also have devices to hold the tension when the bow is fully drawn. They are not limited by the strength of a single archer and larger varieties have been used as siege engines.
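
The trade-off between draw length and draw weight can be sketched with an idealized linear draw-force curve, under which the stored energy is the area under the curve, E ≈ ½ × peak force × power stroke. The figures below are hypothetical and ignore the flatter force curves of real compound bows and crossbows; the point is only that roughly halving the stroke demands roughly double the draw weight for the same energy.

    # Stored energy under an assumed linear draw-force curve (a rough idealization).
    def stored_energy_j(peak_force_n: float, power_stroke_m: float) -> float:
        return 0.5 * peak_force_n * power_stroke_m

    bow = stored_energy_j(267, 0.50)       # ~60 lbf bow over a 0.50 m power stroke
    crossbow = stored_energy_j(667, 0.20)  # ~150 lbf crossbow over a 0.20 m stroke
    print(round(bow), round(crossbow))     # -> 67 67: similar energy despite the short stroke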

Types of arrows and fletchings

The most common form of arrow consists of a shaft, with an arrowhead at the front end, and fletchings and a nock at the other end. Arrows across time and history have normally been carried in a container known as a quiver, which can take many different forms. Shafts of arrows are typically composed of solid wood, bamboo, fiberglass, aluminium alloy, carbon fiber, or composite materials. Wooden arrows are prone to warping. Fiberglass arrows are brittle, but can be produced to uniform specifications easily. Aluminium shafts were a very popular high-performance choice in the latter half of the 20th century, due to their straightness, lighter weight, and subsequently higher speed and flatter trajectories. Carbon fiber arrows became popular in the 1990s because they are very light, flying even faster and flatter than aluminium arrows. Today, the most popular arrows at tournaments and Olympic events are made of composite materials.

The arrowhead is the primary functional component of the arrow. Some arrows may simply use a sharpened tip of the solid shaft, but separate arrowheads are far more common, usually made from metal, stone, or other hard materials. The most commonly used forms are target points, field points, and broadheads, although there are also other types, such as bodkin, judo, and blunt heads.

Shield cut straight fletching – here the hen feathers are barred red

Fletching is traditionally made from bird feathers, but solid plastic vanes and thin sheet-like spin vanes are also used. They are attached near the nock (rear) end of the arrow with thin double-sided tape, glue, or, traditionally, sinew. The most common configuration in all cultures is three fletches, though as many as six have been used. Two makes the arrow unstable in flight. When the arrow is three-fletched, the fletches are equally spaced around the shaft, with one placed such that it is perpendicular to the bow when nocked on the string, though variations are seen with modern equipment, especially when using modern spin vanes. This fletch is called the "index fletch" or "cock feather" (also known as "the odd vane out" or "the nocking vane"), and the others are sometimes called the "hen feathers". Commonly, the cock feather is of a different color. However, archers using fletching made of feather or similar material may use same-color vanes, as different dyes can give varying stiffness to vanes, resulting in less precision. When an arrow is four-fletched, two opposing fletches are often cock feathers, and occasionally the fletches are not evenly spaced.

The fletching may be either parabolic cut (short feathers in a smooth parabolic curve) or shield cut (generally shaped like half of a narrow shield), and is often attached at an angle, known as helical fletching, to introduce a stabilizing spin to the arrow while in flight. Whether helical or straight fletched, when natural fletching (bird feathers) is used it is critical that all feathers come from the same side of the bird. Oversized fletchings can be used to accentuate drag and thus limit the range of the arrow significantly; these arrows are called flu-flus. Misplacement of fletchings can change the arrow's flight path dramatically.

Bowstring

Dacron and other modern materials offer high strength for their weight and are used on most modern bows. Linen and other traditional materials are still used on traditional bows. Several modern methods of making a bowstring exist, such as the 'endless loop' and 'Flemish twist'. Almost any fiber can be made into a bowstring. The author of Arab Archery suggests the hide of a young, emaciated camel. Njál's saga describes the refusal of a wife, Hallgerður, to cut her hair to make an emergency bowstring for her husband, Gunnar Hámundarson, who is then killed.

Protective equipment

A right-hand finger tab to protect the hand while the string is drawn

Most modern archers wear a bracer (also known as an arm-guard) to protect the inside of the bow arm from being hit by the string and prevent clothing from catching the bowstring. The bracer does not brace the arm; the word comes from the armoury term "brassard", meaning an armoured sleeve or badge. The Navajo people have developed highly ornamented bracers as non-functional items of adornment. Some archers (nearly all female archers) wear protection on their chests, called chestguards or plastrons. The myth of the Amazons was that they had one breast removed to solve this problem. Roger Ascham mentions one archer, presumably with an unusual shooting style, who wore a leather guard for his face.

The drawing digits are normally protected by a leather tab, glove, or thumb ring. A simple tab of leather is commonly used, as is a skeleton glove. Medieval Europeans probably used a complete leather glove.

Eurasiatic archers who used the thumb or Mongolian draw protected their thumbs, usually with leather according to the author of Arab Archery, but also with special rings of various hard materials. Many surviving Turkish and Chinese examples are works of considerable art. Some are so highly ornamented that the users could not have used them to loose an arrow. Possibly these were items of personal adornment, and hence value, remaining extant whilst leather had virtually no intrinsic value and would also deteriorate with time. In traditional Japanese archery a special glove is used that has a ridge to assist in drawing the string.

Release aids

Release aid

A release aid is a mechanical device designed to give a crisp and precise loose of arrows from a compound bow. In the most commonly used, the string is released by a finger-operated trigger mechanism, held in the archer's hand or attached to their wrist. In another type, known as a back-tension release, the string is automatically released when drawn to a pre-determined tension.

Stabilizers

Stabilizers are mounted at various points on the bow. Common with competitive archery equipment are special brackets that allow multiple stabilizers to be mounted at various angles to fine tune the bow's balance.

Stabilizers aid in aiming by improving the balance of the bow. Sights, quivers, rests, and the design of the riser (the central, non-bending part of the bow) make one side of the bow heavier. One purpose of stabilizers is to offset these forces. A reflex riser design will cause the top limb to lean towards the shooter; in this case a heavier front stabilizer is desired to offset this action. A deflex riser design has the opposite effect, and a lighter front stabilizer may be used.

Stabilizers can reduce noise and vibration. These energies are absorbed by viscoelastic polymers, gels, powders, and other materials used to build stabilizers.

Stabilizers improve the forgiveness and accuracy by increasing the moment of inertia of the bow to resist movement during the shooting process. Lightweight carbon stabilizers with weighted ends are desirable because they improve the moment of inertia while minimizing the weight added.
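
The moment-of-inertia argument is easy to make concrete. Treating added weights as point masses, I = Σ m·r², so mass placed at the end of a long, light rod resists rotation far more, per gram, than mass mounted near the riser. The masses and distances in this Python sketch are assumptions chosen for illustration, not product figures.

    # Rotational inertia of stabilizer weights about the grip (point-mass model).
    def moment_of_inertia_kg_m2(point_masses):
        # point_masses: iterable of (mass_kg, distance_from_grip_m) pairs
        return sum(m * r ** 2 for m, r in point_masses)

    tip_weighted = moment_of_inertia_kg_m2([(0.030, 0.05), (0.120, 0.75)])  # light rod, 120 g tip
    mid_mounted  = moment_of_inertia_kg_m2([(0.150, 0.375)])                # same 150 g at mid-rod
    print(round(tip_weighted, 4), round(mid_mounted, 4))  # -> 0.0676 vs 0.0211 kg*m^2, ~3x more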

Shooting technique and form

Historical reenactment of medieval archery
Chief Master Sgt. Kevin Peterson demonstrates safe archery techniques while aiming an arrow at a target on the 28th Force Support Squadron trap and skeet range at Ellsworth Air Force Base, S.D., 11 October 2012.

The standard convention in teaching archery is to hold the bow depending upon eye dominance. (One exception is in modern kyūdō, where all archers are trained to hold the bow in the left hand.) Therefore, a right-eye-dominant archer would hold the bow in the left hand and draw the string with the right hand. However, not everyone agrees with this line of thought. A smoother and more fluid release of the string will produce the most consistently repeatable shots, and therefore may provide greater accuracy of arrow flight. Some believe that the hand with the greatest dexterity should therefore be the hand that draws and releases the string. Either eye can be used for aiming, and the less dominant eye can be trained over time to become more effective. To assist with this, an eye patch can be temporarily worn over the dominant eye.

The hand that holds the bow is referred to as the bow hand and its arm the bow arm. The opposite hand is called the drawing hand or string hand. Terms such as bow shoulder or string elbow follow the same convention.

If shooting according to eye dominance, right-eye-dominant archers shooting conventionally hold the bow with their left hand. If shooting according to hand dexterity, the archer draws the string with the hand that possesses the greatest dexterity, regardless of eye dominance.

Modern form

To shoot an arrow, an archer first assumes the correct stance. The body should be at or nearly perpendicular to the target and the shooting line, with the feet placed shoulder-width apart. As an archer progresses from beginner to a more advanced level other stances such as the "open stance" or the "closed stance" may be used, although many choose to stick with a "neutral stance". Each archer has a particular preference, but mostly this term indicates that the leg furthest from the shooting line is a half to a whole foot-length from the other foot, on the ground.

To load, the bow is pointed toward the ground, tipped slightly clockwise of vertical (for a right handed shooter) and the shaft of the arrow is placed on the arrow rest or shelf. The back of the arrow is attached to the bowstring with the nock (a small locking groove located at the proximal end of the arrow). This step is called "nocking the arrow". Typical arrows with three vanes should be oriented such that a single vane, the "cock feather", is pointing away from the bow, to improve the clearance of the arrow as it passes the arrow rest.

A compound bow is fitted with a special type of arrow rest, known as a launcher, and the arrow is usually loaded with the cock feather/vane pointed either up, or down, depending upon the type of launcher being used.

The bowstring and arrow are held with three fingers, or with a mechanical arrow release. Most commonly, for finger shooters, the index finger is placed above the arrow and the next two fingers below, although several other techniques have their adherents around the world, involving three fingers below the arrow, or an arrow pinching technique. Instinctive shooting is a technique eschewing sights and is often preferred by traditional archers (shooters of longbows and recurves). In either the split finger or three finger under case, the string is usually placed in the first or second joint, or else on the pads of the fingers. When using a mechanical release aid, the release is hooked onto the D-loop.

Another type of string hold, used on traditional bows, is the type favoured by the Mongol warriors, known as the "thumb release" style. This involves using the thumb to draw the string, with the fingers curling around the thumb to add some support. To release the string, the fingers are opened out and the thumb relaxes to allow the string to slide off the thumb. When using this type of release, the arrow should rest on the same side of the bow as the drawing hand, i.e., a left-hand draw places the arrow on the left side of the bow.

The archer then raises the bow and draws the string, with varying alignments for vertical versus slightly canted bow positions. This is often one fluid motion for shooters of recurves and longbows, and it tends to vary from archer to archer. Compound shooters often experience a slight jerk during the drawback, at around the last 1 1⁄2 inches (4 cm), where the draw weight is at its maximum, before relaxing into a comfortable, stable full-draw position. The archer draws the string hand towards the face, where it should rest lightly at a fixed anchor point. This point is consistent from shot to shot and is usually at the corner of the mouth, on the chin, on the cheek, or at the ear, depending on preferred shooting style. The archer holds the bow arm outwards, toward the target. The elbow of this arm should be rotated so that the inner elbow is perpendicular to the ground, though archers with hyperextendable elbows tend to angle the inner elbow toward the ground, as exemplified by the Korean archer Jang Yong-Ho. This keeps the forearm out of the way of the bowstring.

In modern form, the archer stands erect, forming a "T". The archer's lower trapezius muscles are used to pull the arrow to the anchor point. Some modern recurve bows are equipped with a mechanical device, called a clicker, which produces a clicking sound when the archer reaches the correct draw length. In contrast, traditional English longbow shooters step "into the bow", exerting force with both the bow arm and the string-hand arm simultaneously, especially when using bows with draw weights from 100 lb (45 kg) to over 175 lb (80 kg). Heavily stacked traditional bows (recurves, longbows, and the like) are released immediately upon reaching full draw at maximum weight, whereas compound bows reach their maximum weight around the last 1 1⁄2 inches (4 cm), dropping holding weight significantly at full draw. Compound bows are often held at full draw for a short time to achieve maximum accuracy.

The arrow is typically released by relaxing the fingers of the drawing hand (see Bow draw) or triggering the mechanical release aid. Usually the aim is to keep the drawing arm rigid and the bow hand relaxed, and to move the arrow back using the back muscles rather than just arm motions. An archer should also pay attention to the recoil or follow-through of his or her body, as it may indicate problems with form (technique) that affect accuracy.

Aiming methods

From Hokusai Manga, 1817

There are two main forms of aiming in archery: using a mechanical or fixed sight, or barebow.

Mechanical sights can be affixed to the bow to aid in aiming. They can be as simple as a pin, or may use optics with magnification. They usually also have a peep sight (rear sight) built into the string, which aids in a consistent anchor point. Modern compound bows automatically limit the draw length to give a consistent arrow velocity, while traditional bows allow great variation in draw length. Some bows use mechanical methods to make the draw length consistent. Barebow archers often use a sight picture, which includes the target, the bow, the hand, the arrow shaft and the arrow tip, as seen at the same time by the archer. With a fixed "anchor point" (where the string is brought to, or close to, the face), and a fully extended bow arm, successive shots taken with the sight picture in the same position fall on the same point. This lets the archer adjust aim with successive shots to achieve accuracy.

Modern archery equipment usually includes sights. Instinctive aiming is used by many archers who use traditional bows. The two most common forms of a non-mechanical release are split-finger and three-under. Split-finger aiming requires the archer to place the index finger above the nocked arrow, while the middle and ring fingers are both placed below. Three-under aiming places the index, middle, and ring fingers under the nocked arrow. This technique allows the archer to better look down the arrow since the back of the arrow is closer to the dominant eye, and is commonly called "gun barreling" (referring to common aiming techniques used with firearms).

When using short bows or shooting from horseback, it is difficult to use the sight picture. The archer may look at the target, but without including the weapon in the field of accurate view. Aiming then involves hand-eye coordination, which includes proprioception and motor-muscle memory, similar to that used when throwing a ball. With sufficient practice, such archers can normally achieve good practical accuracy for hunting or for war. Aiming without a sight picture may allow more rapid shooting, though it does not improve accuracy.

Instinctive shooting is a style of shooting that includes the barebow aiming method and relies heavily upon the subconscious mind, proprioception, and motor/muscle memory to make aiming adjustments; the term has also been used to refer to a general category of archers who do not use a mechanical or fixed sight.

Physics

Mongol archers during the time of the Mongol conquests used a smaller bow suitable for horse archery.

When a projectile is thrown by hand, the speed of the projectile is determined by the kinetic energy imparted by the thrower's muscles performing work. However, the energy must be imparted over a limited distance (determined by arm length) and therefore (because the projectile is accelerating) over a limited time, so the limiting factor is not work but rather power, which determines how much energy can be added in the limited time available. Power generated by muscles, however, is limited by the force–velocity relationship, and even at the optimal contraction speed for power production, the total work done by the muscle is less than half of what it would be if the muscle contracted over the same distance at slow speed, resulting in less than a quarter of the projectile launch velocity that would be possible without the limitations of the force–velocity relationship.

When a bow is used, the muscles are able to perform work much more slowly, resulting in greater force and greater work done. This work is stored in the bow as elastic potential energy, and when the bowstring is released, the stored energy is imparted to the arrow much more quickly than it could be delivered by the muscles, resulting in much higher velocity and, hence, greater distance. The same principle is employed by frogs, which use elastic tendons to increase jumping distance. In archery, some energy dissipates through elastic hysteresis, reducing the overall amount released when the bow is shot. Of the remaining energy, some is damped both by the limbs of the bow and by the bowstring. Depending on the arrow's elasticity, some of the energy is also absorbed by compressing the arrow, primarily because the release of the bowstring is rarely in line with the arrow shaft, causing it to flex out to one side. This is because the bowstring accelerates faster than the archer's fingers can open, and consequently some sideways motion is imparted to the string, and hence to the arrow nock, as the power and speed of the bow pulls the string off the opening fingers.
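
A minimal sketch of this energy bookkeeping follows: assume the bow stores E joules and that a fixed fraction (the bow's efficiency, here an assumed 75%) reaches the arrow as kinetic energy, so the launch speed follows from E_k = ½mv². All numbers are illustrative assumptions, not measurements of any particular bow.

    import math

    # Launch speed from stored energy, assuming a fixed fraction reaches the arrow.
    def arrow_speed_ms(stored_energy_j: float, arrow_mass_kg: float,
                       efficiency: float = 0.75) -> float:
        kinetic_j = efficiency * stored_energy_j       # remainder lost to hysteresis,
        return math.sqrt(2.0 * kinetic_j / arrow_mass_kg)  # damping, and arrow flex

    print(round(arrow_speed_ms(67, 0.025)))  # ~67 J stored, 25 g arrow -> ~63 m/s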

Even with a release aid mechanism some of this effect is usually experienced, since the string always accelerates faster than the retaining part of the mechanism. This makes the arrow oscillate in flight, its center flexing to one side and then the other repeatedly, gradually settling as the arrow's flight proceeds. This is clearly visible in high-speed photography of arrows at discharge. A direct effect of these energy transfers can clearly be seen when dry firing, that is, releasing the bowstring without a nocked arrow. Because there is no arrow to receive the stored potential energy, almost all the energy stays in the bow. Some have suggested that dry firing may cause physical damage to the bow, such as cracks and fractures; because most bows are not specifically made to handle the high energies dry firing produces, it should never be done.

Snake Indians – testing bows, c. 1837, by Alfred Jacob Miller, the Walters Art Museum

Modern arrows are made to a specified 'spine', or stiffness rating, to maintain matched flexing and hence accuracy of aim. This flexing can be a desirable feature, since, when the spine of the shaft is matched to the acceleration of the bow(string), the arrow bends or flexes around the bow and any arrow-rest, and consequently the arrow, and fletchings, have an un-impeded flight. This feature is known as the archer's paradox. It maintains accuracy, for if part of the arrow struck a glancing blow on discharge, some inconsistency would be present, and the excellent accuracy of modern equipment would not be achieved.

The accurate flight of an arrow depends on its fletchings. The arrow's manufacturer (a "fletcher") can arrange fletching to cause the arrow to rotate along its axis. This improves accuracy by evening out pressure buildups that would otherwise cause the arrow to "plane" on the air in a random direction after shooting. Even with a carefully made arrow, the slightest imperfection or air movement causes some unbalanced turbulence in the air flow. Consequently, rotation creates an equalization of such turbulence, which, overall, maintains the intended direction of flight, i.e., accuracy. This rotation is not to be confused with the rapid gyroscopic rotation of a rifle bullet. Fletching that is not arranged to induce rotation still improves accuracy by causing a restoring drag any time the arrow tilts from its intended direction of travel.

The innovative aspect of the invention of the bow and arrow was the amount of power delivered to an extremely small area by the arrow. The huge ratio of length to cross-sectional area, coupled with velocity, made the arrow more powerful than any other hand-held weapon until firearms were invented. Arrows can spread or concentrate force, depending on the application. Practice arrows, for instance, have a blunt tip that spreads the force over a wider area to reduce the risk of injury or limit penetration. Arrows designed to pierce armor in the Middle Ages used a very narrow and sharp tip ("bodkinhead") to concentrate the force. Arrows used for hunting use a narrow tip ("broadhead") that widens further back, to facilitate both penetration and a large wound.
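
The force-concentration point reduces to p = F/A. In the hypothetical comparison below, the same impact force delivered through a ~1 mm² bodkin point produces several hundred times the pressure of a ~3 cm² blunt practice tip; all numbers are assumptions chosen only to show the scale of the ratio.

    # Pressure at the tip for the same impact force but different tip areas.
    def pressure_pa(force_n: float, tip_area_m2: float) -> float:
        return force_n / tip_area_m2

    force_n = 500.0
    bodkin = pressure_pa(force_n, 1e-6)   # ~1 mm^2 armor-piercing point
    blunt  = pressure_pa(force_n, 3e-4)   # ~3 cm^2 blunt practice tip
    print(f"{bodkin:.1e} Pa vs {blunt:.1e} Pa")  # -> 5.0e+08 Pa vs 1.7e+06 Pa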

Hunting

A modern compound hunting bow

Using archery to take game animals is known as "bow hunting". Bow hunting differs markedly from hunting with firearms, as distance between hunter and prey must be much shorter to ensure a humane kill. The skills and practices of bow hunting therefore emphasize very close approach to the prey, whether by still hunting, stalking, or waiting in a blind or tree stand. In many countries, including much of the United States, bow hunting for large and small game is legal. Bow hunters generally enjoy longer seasons than are allowed with other forms of hunting such as black powder, shotgun, or rifle. Usually, compound bows are used for large game hunting due to the relatively short time it takes to master them as opposed to the longbow or recurve bow. These compound bows may feature fiber optic sights, stabilizers, and other accessories designed to increase accuracy at longer distances. Using a bow and arrow to take fish is known as "bow fishing".

Modern competitive archery

Competitive archery involves shooting arrows at a target for accuracy from a set distance or distances. This form, called target archery, is the most popular form of competitive archery worldwide. A form particularly popular in Europe and America is field archery, shot at targets generally set at various distances in a wooded setting. Competitive archery in the United States is governed by USA Archery and the National Field Archery Association (NFAA), which also certifies instructors.

Para-archery is an adaptation of archery for athletes with a disability, governed by the World Archery Federation (WA), and is one of the sports in the Summer Paralympic Games. There are also several other lesser-known and historical forms of archery, as well as archery novelty games and flight archery, where the aim is to shoot the greatest distance.
