Category Archives: Science and Math

Science and Math

The Genius of Chickens

I have lived around chickens all my life. My grandparents had a small egg farm, and when they retired, they always had a couple dozen chickens and ducks around for no other reason than that they liked their character. I live in the suburbs now, but we are allowed three hens, and their vicissitudes form a steady beat of panic and joy amidst the routine of Human Stuff that needs dealing with. So, when I came across Carolynn L. Smith and Sarah I. Zielinski’s article in the most recent Scientific American about new evidence for intelligence and even empathy in chickens, I rejoiced at scientific attention finally being focused on the complex social life of these wonderful animals.

 

Smith has been running experiments to test how chickens use their language in different situations, and the results show a degree of cunning that one usually only associates with humans or, let’s face it, cats. For instance, chickens have a call that means “there is a predator coming down from the sky,” but they are very selective about when they use it. As Smith explains, “A rooster that sees a threat overhead would make an alarm call if he knows there is a female nearby, but he would remain silent in the presence of a rival male.” More than that, if the rooster is in cover and sees a rival out by himself in the field, he will go ahead and make the call anyway, knowing that it will draw attention to his rival while costing him nothing. That’s some meta-level calculation going on there, and it speaks of a depth of consciousness not often associated with these animals.

 

More endearingly (though, I suppose if you’re a fan of devious critters, the above was rather endearing), they have a marked capacity for empathy and concern. I notice this all the time at home – when my daughter takes one of the chickens out of their run area to pet for a while, the other two will pace back and forth making panicked little noises until their friend is returned. Those three are an inseparable unit, deeply invested in each other’s welfare. Organized research has substantiated these impressions, showing that mother hens who see their chicks in distress will show stress responses of their own and cluck anxiously until things are set aright – they have the ability to imagine what another chicken is dealing with and to empathize with it appropriately.

Chickens… they’re pretty neat. Let’s stop being dicks to them.

All of which should make humanity feel like proper asses for the way we treat these smart and affectionate animals, shoving them into factories with no room to move, injecting them with growth hormones and then slaughtering them at a tenth of their usual life span, before too much inflicted malformation sets in. It is a practice unworthy of an enlightened species, as we claim to be. If ignorance of their mental capacity was our excuse before, that won’t serve now.

 

If you want to help this species out a bit, here is an easy first step, a petition to end factory farming in Canada. If you must buy ten minutes of stimulated taste receptors at the cost of another sentient being’s life (and, to be frank, every time you eat meat, that’s the grossly uneven trade you’re making), you can at least make sure that that life is as decent as possible.

 

This isn’t a trend which will reverse itself any time soon. The more we learn about animals, the more we find neural structures that enable them to experience bits and snatches of our own rich emotional life, and the harder it will be to justify our millennia of abusive and callous stewardship.

Books Science and Math

Humans are Great 8: Feynman’s Mirror

One of the unfortunate things we humans tend to do is rate a genius for invention as superior to a genius for explanation.  We stand with (rightful) awe before the original insights of a Bernhard Riemann but shrug off the efforts of people who took brilliant but convoluted existing ideas and found a way for the mass of humanity to gain some purchase on them.  But if something like calculus, which stumped a continent at its first unveiling, is second nature to sixteen- and seventeen-year-old high schoolers now, it is largely because of those people who had a genius for reforming the clunky and abstract into something graspable but still faithful to the rigor of the original.

To be either a creative or explanatory genius is quite enough to earn our dazzled esteem, but to be both is to enter a slim minority of world figures indeed.  Charles Darwin was one such, and I would rank English mathematician GH Hardy as another, but for most science-y people, if you say the words “brilliant explainer” and “genius scientist” in the same breath, they will respond, “Oh, you mean like Richard Feynman?”

And deservedly so.  Yes, he’s been rather – merchandized – as of late, and with that over-exposure has come something of a backlash.  “Oh, Feynman?  I’m so done with that guy.”  But if we step back, away from the t-shirts and novelty coffee mugs, maybe we can recall for a bit what made us fall in love with him in the first place.

 

 

For me, there is no better demonstration of him at his very best than the Mirror Example in his QED series of lectures.  It is the quintessence of everything admirable about Feynman’s mind – the ability to take a vastly thorny concept and craft a physical example that retains all of the essential features of the original while smoothing out the parts that contribute formally but not comprehensibly to the whole.

What Feynman is trying to illustrate with the example is how quantum electrodynamics weighs and combines different possible interactions for a given set of particles to calculate expected observable values.  He asks us to consider how a mirror works, and starts off the way every good science explanation should, with a confidence builder.

He reminds us of the law of reflection, which says that the angle of reflection for light bouncing off a mirror is equal to the angle of incidence (the angle it came in at).  “I remember that!” we all say, and feel that excited willingness to push on that only comes with an initial burst of confidence.  Also, we now have an anchor to come back to if we feel ourselves getting lost.  These are fundamentals of good explanation practice that Feynman just intuitively felt, and are what make him so compelling to read still, a half century on.

Having established a solid base, he starts branching outward.  What if I told you that, in fact, the path where the angle of incidence equals the angle of reflection is just one possible option, the one that takes the least time to travel, granted, but that there are many more paths which light can, and does, take?

 

We get excited – something that we knew for sure was right turns out to have a little bit of devil living inside of it.  And Feynman uses that excitement to start talking about probability vectors, something that most people wouldn’t have immediately found themselves interested in, but that now, eager to resolve the mystery, they will pay rapt attention to.  He tells us how to construct vectors for different possible pathways (say, A-D-B or A-E-B in the above figure from the foundational Feynman Lectures), and uses those vectors to construct a total picture of all possible reflections off the mirror:

 

 

What was an obscure concept involving vector addition and complex exponentials thus transforms, through his flair for turning mathematical machinery into physical representation, into this picture which beautifully represents how reality works.  Yes, there are lots of alternate pathways, but the ones at the edges of the mirror tend to cancel each other out, since they all point different ways, so the behavior that we witness is primarily created by the middle of the mirror, where the angle of incidence equals the angle of reflection.  So, at the end of the day, the law we learned in high school is largely true, from a certain point of view.
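If you would like to see those arrows cancel and reinforce with your own eyes, here is a little toy model of the arrow-adding in Python.  The geometry, the wavelength, and the slice widths are my own made-up numbers, not Feynman’s, but the bookkeeping is his: one arrow per bounce point, with its angle set by the length of the path measured in wavelengths.

import numpy as np

# A toy "sum over paths" for light bouncing off a mirror, in the spirit of
# Feynman's arrows.  All of the numbers here are illustrative.
wavelength = 0.05                         # arbitrary units, small next to the geometry
source   = np.array([-5.0, 3.0])          # where the light starts
detector = np.array([ 5.0, 3.0])          # where we hope to catch it

# Chop the mirror (the x-axis from -6 to 6) into thousands of little strips.
xs = np.linspace(-6.0, 6.0, 4001)
bounce = np.stack([xs, np.zeros_like(xs)], axis=1)

# One arrow per strip: its angle is the source-strip-detector path length in wavelengths.
lengths = (np.linalg.norm(bounce - source, axis=1)
           + np.linalg.norm(detector - bounce, axis=1))
arrows = np.exp(2j * np.pi * lengths / wavelength)

total  = arrows.sum()
middle = arrows[np.abs(xs) < 1.0].sum()   # a slice around the classical bounce point
edge   = arrows[xs > 4.0].sum()           # an equally wide slice out at one edge

print(f"|all strips|    = {abs(total):7.1f}")
print(f"|middle strips| = {abs(middle):7.1f}   (most of the effect)")
print(f"|edge strips|   = {abs(edge):7.1f}   (the arrows spin around and cancel)")

Run it and the edge slice comes out a tiny fraction of the middle one, which is exactly the cancellation Feynman’s picture is describing.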

But then, like all good magicians, he saves his last trick for the moment when we feel comfortable and reestablished in the world.  We can, he informs us, scrape away just those strips of the mirror whose arrows cancel their neighbors’ contributions, so that what is left at the edges no longer cancels and the edges contribute again.  So, if we wanted, we could purposefully construct mirrors (which is exactly what a diffraction grating is) that break the law of reflection after all.  Thank you, quantum mechanics.

And so, from safety, through excitement, to comprehension, back to safety, and on into daredevilry, Feynman has taken something outside the veil of everyday thought and brought it home to us all.  It’s that willingness to take some time to work on MERE explanation that I love about him, and about those generations and generations of teachers who sit up at nights trying to find new ways to illustrate our scientific heritage to coming generations.

Philosophy Science and Math

William Lane Craig’s Seven Reasons for God’s Existence

William Lane Craig has a surprise for us.  In the newest (Nov/Dec 2013) issue of Philosophy Now, he announces that, not only is philosophical theism not dead, but it is actually the most vibrant part of modern American philosophy, beating archaically positivist atheists back in chaotic retreat whenever it unfurls its revolutionary new arguments for God’s existence.

And what’s more, Craig confidently claims, in the space of four pages he is going to present us with seven of the freshest, most undeniable arguments for the existence of God yet produced by this flourishing legion of great minds.  I admit to being rather excited to read something new at long last, something that would really shake the foundations of my weaker assumptions and force me to grapple again with my philosophical principles.  Sitting up with anticipation, I proceeded to the first of these brand new, entirely irrefutable arguments….

And it was the First Cause Argument, stated in substantially the same form it had when its inner contradictions were laid bare a century and a half ago.  Gaze at these two initial steps, if you would:

1.  Every contingent thing has an explanation of its existence.

2.  If the universe has an explanation of its existence, that explanation is a transcendent, personal being.

Imagine my disappointment that not only is this version no update or improvement on what has gone before, but it slips into the very trap of false qualitative equivalence that the better versions of this argument have at least attempted to address for a while now: the first step sets up an analogy, but the second introduces (or “slips in,” if you’re feeling uncharitable) a qualitatively different kind of event, one that breaks the chain of analogical reasoning.

Fine, then, the first argument doesn’t precisely break revolutionary ground.  Perhaps the second will:

2. God is the best explanation of the origin of the universe.

Or, he could just restate the structure of the first argument with slightly different evidence.  Which is what he, in fact, does.  The new evidence is the Vilenkin theorem that the universe must have had a definite beginning.  Again, it’s a modified Aristotelian argument by analogy, and again the same problem of hidden qualitative distinctions rears its head.  We can give him that it’s possible the universe had a definite beginning and that cyclical or chaotic models might ultimately prove untrue.  But that doesn’t give him quite the stretch-room he needs here.  He needs creation from nothing to be qualitatively similar to the re-configuration of existing matter that usually brings “new” objects into existence, otherwise the analogy doesn’t work.  Unfortunately, those two acts are about as dissimilar as can be, and to argue from the prerequisites of the latter backwards to the implied prerequisites of the former is just irresponsible.  And that’s been common knowledge for a while now.

 

Moving along, let’s take the third and fourth arguments together, because they both come from the same place and suffer from the same problems:

 

3. God is the best explanation of the applicability of mathematics to the physical world.

4. God is the best explanation of the fine-tuning of the universe for intelligent life.

 

Argument three evinces a distinct disregard for the work in the philosophy of mathematics done over the past century.  It over-emphasizes math as a static body of knowledge and fails to mention anything about mathematics as a method, its assumptions and techniques, and how those might or might not be effective at engaging with the universe.  Only by confronting the research done in that field can you even start making statements about how “coincidental” the correspondence of certain parts of mathematics as they are currently understood with the physical universe as it is currently understood might be, and how much of a miraculous intercession is necessary to cover that supposed coincidence.  To make these statements without mentioning the work of Pickering or Plotnitsky is to hold up an easy and uncomplicated ideal in place of messy reality, which is lazy at best and consciously deceptive at worst.

 

As to four, it’s the Sweet Spot argument writ universal, and, of course, the problem with it is that it is devastatingly myopic.  He says that the constants of the universe are arrayed within the thinnest sliver of possible values that makes life as we know it possible, and that therefore the life-sustaining nature of the universe is a sign that it has been designed for life, by somebody.  God.  Ignoring all the more obvious problems of circularity that the argument has dragged with it for the better part of a century, what I always find a curious oversight is the fact that, just as surely as human life exists in this universe during this slice of time, so will it surely not exist in another slice of time, not too far removed from our own.  The sun will explode, and even if we escape that, there is an expiration date on matter’s cohesion in an expanding universe that is running a race with entropy to wipe us out no matter where we go.  The universe is a short-term life sustainer, but a long-term life destroyer.  To favor the former aspect over the latter is understandable if you think that Jesus is going to show up and whisk everybody away before all of the bad stuff happens, but if you’re starting from a blank slate of belief, the construction of the universe seems so overwhelmingly against the long-term existence of humanity that only a God with the sadistic instincts of a house cat would have so designed it.

 

5. God is the best explanation of intentional states of consciousness.

This argument supposes that mere materialism cannot account for the Aboutness of human thought.  It absolutely can, and a fair number of the neurochemical pathways that allow us to access and coordinate memories in conjunction with received stimuli have been mapped in loving detail by an army of quietly diligent heroes whose names we’ll never bother to know.  Yes, thoughts seem like they are very subjective and outside of your mere physical matter.  But they’re not.  They’re a chain of chemical reactions pushed by other chemical reactions, and our experience of believing ourselves to be having a thought is itself, you guessed it, a recursively sustained chemical reaction, a war of inhibitors and neurotransmitters all galloping along with our primate DNA to sift through the world’s offerings for the most important bits of data.  But, hey, at least this argument is merely five decades old.  So, progress.

 

6. God is the best explanation of objective moral values and duties.

This was CS Lewis’s big starting argument in Mere Christianity, back in 1943, and of course goes back before then.  It was a somewhat forgivable argument for the forties, but is utterly indefensible in the face of what we have learned since about the origins of empathy from primatology, and of the nature of our decision pathways from neurobiology.  We have discovered more and more instances of supposedly Human Exclusive moral behavior in our research of animals, pushing the uniqueness of our ethical behavior into a narrow scope so obviously linked to what came before that to suggest the need for a divine source is to be astonishingly unwilling to engage with the past half century of research on the subject.

Thence to the big finale…

7. The very possibility of God’s existence implies that God exists.

Yep, the ontological argument, that revolutionary new idea from the eleventh century.  A part of me was hoping that Craig would save his most daring and interesting argument for last, and the groan of disappointment I uttered upon reading that line resonated through the house.    Craig adds nothing we haven’t seen before, and this argument has been dealt with too many times to even bother with a recapitulation of its manifold flaws.

 

What started off with bold and heady claims of originality, of a new wave of Christian theology that would blow the lid off everything we thought we knew, turned out, then, to be little more than a limp through common ground that, at its freshest, grazed the 1960s, but mostly kept itself safely within centuries-old wisdom, firmly restated with all the long-observed warts still manifestly in place.  A decided letdown.

Culture Music Science and Math

Humans are Great 5: Mathematicians and their Music

I was chatting with a Ukrainian friend the other day when she asked me, “Do you play any musical instruments?”  I admitted that I could, by certain not terribly high standards, be called a piano player.  “A-ha!  I knew it.  Math people are always music people,” she responded triumphantly, and started to list off all the people she knew who had a combined love of math and classical music.

Of course, we in the United States are bound to take all utterances from Ukrainians on the subjects of music, math, and ballet as unquestionably true.  But there’s a lot of supplementary evidence as well, from great mathematicians and physicists who either played an instrument or had a deep and profound love of music, to the necessary connections between what is great about math and what is great about music that attract one and the same mind.

 

 

It’s the structural similarities that get me.  Mathematics is the art of saying a universe while bound by formalist fetters of the toughest stuff.  Every word, every turn, has to bear the scrutiny of an epoch of rigor.  When you find something new to say within those confines, you’ve pulled off an unparalleled act of creation.  A stunning proof can get me positively teary-eyed, and it’s that exact same structure of finding creativity in the face of impossible restriction that touches me in classical music.

I’m going to take an extreme example because, hey, it’s the Holidays.  Consider the last movement of Beethoven’s Appassionata Sonata.  It is from his stormy middle period and is often used in film when they need a piece of piano literature for an unhinged virtuosic criminal mastermind to thrash out in the solitude of his mountain fortress.  Or maybe I just feel like it should be.  In any case, the restrictions are profound.  Leave out a note, and you’ve ruined it.  Ignore a dynamic marking, and you will be dropped from all men’s esteem.  Considering the freedom that you have as a pop star when covering a song to do pretty much whatever you damn well please as long as something like the melody of the chorus creeps through, it seems like there would be nothing left to individual human creativity when playing this piece of music.  We should have a hundred recordings, each a metronomical copy of the other, the only difference being the quality of the sound equipment employed.

And we do have a hundred recordings, but the amount of variation that the performers have squeezed out over the years within the constraints set by Beethoven is astounding.  Here is Wilhelm Kempff, one of the greats, performing it with his immaculate attention to the possibility for dynamic change within each measure (fast forward to 15:43 to get the third movement):

 

Now, compare that to Sviatoslav Richter’s performance, which basically conceives of the movement as an exercise in titanic thrash metal.  He is about speed and ferocity.  All the notes are the same, but the philosophical center of the piece is wildly different.

 

 

As I said, these are two extremes of an already extreme piece of music.  Part of the endless joy of classical music for my math-snuggling mind is sniffing out moments where performers do something unspeakably subtle that is entirely within the rules but that changes utterly the flavor of a piece, savoring that human ability to express individuality in the most seemingly unpromising situations.  Those moments have all the thrill of finding buried treasure, precisely because they are so hard to accomplish.  Further, once that new variation is discovered, it is added to our total experience of the piece, always there in the background, defining what comes after, so that each new performance is really a communication with all those that have come before.  Just as a mathematical proof is a conversation with Euler and Lagrange and Hilbert, so is each new Appassionata recording a piece of art that bears with it the decisions made by Kempff and Richter and thousands of others, and the more records you listen to, the better and richer each new record becomes.

So, get listening!

Culture Science and Math

The Disappearing American Science Student

“Don’t they teach recreational mathematics anymore?!” – Doctor Who

 

No, Doctor, they don’t.  At least not according to Harold Levy’s sobering article in the new (Dec 13) issue of Scientific American, which rolls out some truly dispiriting statistics about the state of science and math enthusiasm in the United States.  For example, we learn that, in 2001, 65 percent of all electrical engineering doctorates awarded in the United States were given to foreign students, and that in 2009 46 percent of all master’s degrees in computer science went to students on foreign visas.

 

And no, the phenomenon has nothing to do with Diversity Quotas, so you can put that speech away, and everything to do with our inability to produce inspired and inspiring first-tier college students out of our high school system.  As a calculus and physics teacher since 2003, parent since 2004, and private tutor since my high school days, I’ve been watching this trend, first-hand, from a few different angles.  The good news is that the educational community is by no means taking these trends lying down, and some very exciting things are in the works which stand to make us a much more scientifically literate nation.

 

One of the things that I, and many math-first people of my ilk, have done much wrong-headed grumbling about is the rise of the Conceptual Science curriculum.  It started with physics, is making its way into chemistry, and is basically an attempt to give people solid scientific instincts independent of advanced mathematical skills.  Originally, the idea was that these Conceptual classes would be a good place to stuff struggling students, so as to cut some of the dead weight from the normal and honors physics classes.

 

Which led to unfortunate things, because the teachers that were stuck with the Conceptual classes tended to be at the bottom of the seniority pole, and so you had rookies teaching castoffs, which, in spite of what the movies say, ends rather more often in disaster than in inspiration.  But then people started realizing the raw potential here.  To illustrate, consider the following two problems, the first a typical physics class question, and the second a typical Conceptual physics question:

 

  1.  Two forces act on a 4 kg rope in opposite directions, one of magnitude 300 N and the other of magnitude 500 N.  Calculate the tension in the rope and the acceleration of the system.

 

  2.  Two guys engage in a game of tug-of-war.  If they both pull with 200 N of force, what’s the tension in the rope?  Now, what would the tension be if we replaced one of the guys with a tree?

 

The first invites the student to construct a free body diagram, derive the relevant Newtonian equations, and solve for some desired variables.  All very standard and expected.  The second asks you to think, really think, about just what is going on here.  DOES it matter if I replace a man pulling backwards with a stationary tree?  Shouldn’t it?  But maybe not… why?
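For the record, here is the boilerplate calculation the first problem is fishing for, as a quick sketch in Python.  I am assuming the usual textbook reading: a uniform rope, with “the tension” meaning the tension at its midpoint.

# Standard free-body treatment of problem 1: a uniform 4 kg rope, pulled both ways.
m       = 4.0      # kg, mass of the rope
f_big   = 500.0    # N
f_small = 300.0    # N

a = (f_big - f_small) / m          # Newton's second law applied to the whole rope
t_mid = f_small + (m / 2) * a      # the back half (2 kg) feels 300 N one way, T the other

print(f"acceleration of the system: {a:.0f} m/s^2")   # 50 m/s^2
print(f"tension at the midpoint:    {t_mid:.0f} N")   # 400 N

# Problem 2 is the one that needs no code: a massless rope pulled with 200 N at
# each end carries 200 N of tension, and swapping a man for a tree changes
# nothing, because the tree pulls back just as hard.  Which is rather the point.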

 

I try to incorporate these moments whenever I can into my AP Physics class, to break the students out of their very meticulously learned algorithms, and make them think about the actual physicality of what’s going on, to develop sure scientific instincts about what matters and what doesn’t, to get them debating about the variables and how they come into play.  It’s that intuition that my parents’ generation had but that, swamped with the need to perform well on standardized tests, was systematically murdered by the educational system over the course of several decades.  It is recreational: you are playing and weighing and arguing and having a grand old time talking about a rope and a tree, which is precisely the sort of free intellectual play that sustains people in their interest to pursue the rigorous course of a scientific education.

 

So, that’s all great, and it’s getting injected more and more into all levels of the high school curriculum.  But it’s not quite enough.  It’s not enough to just think about science and math until the end of your assigned problem set.  We need kids who actively choose to spend their leisure investigating problems that they find interesting, delving more deeply into topics they find compelling.  And that is all about parental modeling – the kiddos need to see from their earliest days that, the work day done, their parents don’t just flop insensibly into the warm and easy embrace of television, booze, or incessant Facebook nattering.

 

They need to see parents with intellectual hobbies, really ANY intellectual hobbies – a dad who takes a half an hour each night to read through some poetry, not because it broadens his education, but because he actually enjoys it.  A mother who has a few Erlenmeyer flasks in the garage for an experiment now and again.  Something that shows the kids that the care of one’s mind can actually be a joy far surpassing mere satiation.  They take their lessons in the use of recreational time from us – in many ways it’s the most important thing we have to teach them, and the one easiest to neglect.

 

But.  If we do make a sort of civilizational commitment to being mindful of our leisure hours, and if we do continue to find ways to structure curriculum to spark surprise and argument instead of the comparative ease of an expected algorithm, we have a chance to raise a remarkable generation of thinkers.

Science and Math

Humans Are Great 2: The Popcorn Function

Mathematics is the summit of everything I find wonderful about mankind.  It requires the most rigorous thinking of which we are capable married to an unflinching creativity, an astounding sense of space and movement, and a poetic regard for the pregnancy of words.  Technically, I suppose that’s a marriage on the polygamous side, but I’m all for that too.  In any case, once you get past the decade-long tutorial, learning the names and rules for all the different tools, you get to start having fun trying to Break Math.  Seeing mathematicians hot on the hunt for something that will tear down a millennia-old assumption is really quite beautiful, and another example of humans just being great.

One of my favorite examples of Math Gone Mad is called (among other less whimsical names) the Popcorn Function.  It goes like this:

 

f(x) = 1/q, if x is a rational number written in lowest terms as p/q (with q > 0)

f(x) = 0, if x is irrational
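If you like your definitions executable, here is a tiny sketch of the function in Python.  I am using exact Fractions for the rational inputs, since a floating-point number can’t honestly tell you whether it is “really” rational, and anything that isn’t a Fraction gets treated as a stand-in for an irrational.

from fractions import Fraction

def popcorn(x):
    """Thomae's popcorn function: 1/q at a fraction p/q in lowest terms, 0 elsewhere."""
    if isinstance(x, Fraction):
        return Fraction(1, x.denominator)   # a Fraction always stores itself in lowest terms
    return 0                                # non-Fractions stand in for irrational inputs

print(popcorn(Fraction(3, 4)))    # 1/4
print(popcorn(Fraction(2, 4)))    # 1/2, because 2/4 reduces to 1/2
print(popcorn(2 ** 0.5))          # 0, our stand-in for the square root of 2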

 

And here is a snapshot of a part of it.

 

The Popcorn Function!

 

It’s popularly called the popcorn function because all of the rational x’s pop up to one over their denominator, while all of the irrationals stay stuck on the x-axis.  Now, think back to your high school Pre-Calculus or Calculus class.  You might remember a working definition of continuity that says, “A graph is continuous if you don’t have to lift your pencil while drawing it.”  Just looking at this picture, it is hard to picture something LESS continuous-seeming.

AND YET, it turns out that this function is continuous at all irrational numbers but discontinuous at all rational numbers.

That seems a rather wildly improbable statement, and yet the proof of it is delightfully uncomplicated, and in fact is something you might want to whip out at your next cocktail party while the Catan board is getting set up.  It all relies on a more rigorous definition of continuity, known as the ε-δ definition.  Just written out, it looks horrid:

 

“A function f(x) is continuous at x=a if, for any ε > 0, there exists a δ > 0 such that if |x-a| < δ, then |f(x) – f(a) | < ε.”

 

When I introduce this to my calculus students, there is usually a fair amount of rending of clothing and gnashing of teeth, but the idea is actually very simple: “If two x values, let’s say a and b, are close to each other, then f(a) and f(b) should be close to each other too.”  It’s the pencil requirement written mathematically – to move right a little bit while drawing my curve I shouldn’t have to move up or down very far.

So, to prove that something is continuous at x=a, I have to show that, for any value of epsilon (ε), no matter how small, I can find a neighborhood of x values around x=a whose function values all end up within ε of f(a).  Conversely, to prove that a function is NOT continuous at x=a, I just need to produce a value of ε for which it is impossible to find such a neighborhood around x=a.

Now, I said that the Popcorn Function is continuous at every irrational number and discontinuous at every rational number.  Let’s start with the easy part, proving the discontinuity at the rationals.  To do it, I’m going to use a smashing attribute of the number line – that the irrationals and rationals are both “dense.”  That means that, no matter how small a step I take from a rational number, I’m going to cross infinitely many irrationals, and no matter how small a step I take from an irrational number, I’m going to cross an infinite number of rational numbers along the way.  Any neighborhood, no matter how small, of any number will contain infinitely many other irrational and rational numbers.  There is just as much richness to contemplate from 0 to 1 as from negative infinity to positive infinity.

So, let’s say that my “a” value is rational, written in lowest terms as p/q, so f(a) = 1/q.  I’m going to choose 1/(2q) as my ε value.  Now, no matter what value of delta I choose, there are going to be infinitely many irrational x-values within that neighborhood of a, all with a function value of 0.  So, |f(x)-f(a)| for those irrational x’s will equal 1/q, which is more than our ε value.  So, not all points within any delta of a will end up within ε of f(a), and the function is not continuous at x=a if a is rational.  Neat!

But we have barely begun to climb Mt. Nifty.  Now, suppose a is irrational (so, f(a) = 0), and that I choose some random, rational value for ε (if the fact that I’m limiting ε to rational numbers disturbs you, good, but if you really want to use an irrational ε, I can always find a rational one smaller than it but still positive, and use that one for the proof).  Epsilon, being rational, has an integer denominator, let’s call it q, which means ε is at least 1/q.  So, all I need to do is find a delta neighborhood around “a” that definitely does not contain any rational x values with a denominator of q or smaller.

And, it turns out, I can do that.  Think about it.  Let’s say my “a” is equal to 2 point something something something.  Now, between 2 and 3 there is only one reduced fraction with denominator equal to 2 (namely, 5/2), only 2 with denominator equal to 3 (7/3 and 8/3), only 2 with denominator equal to 4 (9/4 and 11/4), and so on.  The point being that, no matter how big q (the denominator of my original ε) is, there are only a finite number of rational values around a with a denominator of q or smaller.  Since there are only finitely many, one of them will be CLOSEST to “a”.  If I choose my delta just smaller than that distance, I am absolutely guaranteed that every rational x value within that delta neighborhood will have a denominator bigger than q, and as such, f(x) will always be less than ε, and so, at x=a, the function is continuous!!

And one more ! for good measure.
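If that delta-hunting step feels a bit abstract, here is a small sketch that actually goes and finds such a delta for a equal to the square root of 2 (or rather its float approximation, which is plenty good for illustration).  For an ε of the form 1/q, it measures the distance from a to the nearest fraction with denominator q or smaller; any delta below that distance does the job.

from math import floor, sqrt

def safe_delta(a, q):
    """Distance from a to the nearest fraction with denominator q or smaller.

    Inside any smaller delta-neighborhood of a, every rational has a denominator
    bigger than q, so the popcorn function stays strictly below 1/q there."""
    return min(
        abs(a - (floor(a * s) + k) / s)   # the fraction just below / just above a
        for s in range(1, q + 1)
        for k in (0, 1)
    )

a = sqrt(2)
for q in (2, 5, 50):
    print(f"eps = 1/{q}:  any delta below {safe_delta(a, q):.6f} will do")

Notice how the safe delta shrinks as ε does, which is exactly what the definition demands; it just never has to shrink all the way to zero.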

So, in spite of the fact that there are infinitely many places where this function is hopping up off the number line, it is actually, technically, continuous at every single irrational number.  What’s even weirder is that, and here I’m going to turn to the calculus-remembering folk for a bit, this function is actually integrable too, since its set of discontinuities, the rational numbers, is countable (and therefore has measure zero, which is all that Riemann integrability asks of a bounded function’s set of discontinuities)!  *Electric Air Guitar Riff!*
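And, since I have already dragged the calculus folk back in, here is one last brute-force sketch of my own that watches those Riemann sums behave.  It computes upper sums of the popcorn function on the interval from 0 to 1, leaning on the fact that the tallest kernel in any little subinterval sits at the fraction with the smallest denominator inside it, and those upper sums duly creep toward zero as the subintervals shrink.

from fractions import Fraction
from math import ceil, floor

def upper_riemann_sum(n):
    """Upper Riemann sum of the popcorn function on [0, 1], using n equal pieces."""
    total = Fraction(0)
    for k in range(n):
        lo, hi = Fraction(k, n), Fraction(k + 1, n)
        s = 1
        # find the smallest denominator s such that some fraction m/s lands in [lo, hi]
        while floor(hi * s) < ceil(lo * s):
            s += 1
        total += Fraction(1, s) * Fraction(1, n)   # sup of f on the piece, times its width
    return total

for n in (10, 100, 1000):
    print(f"n = {n:4d}:  upper sum = {float(upper_riemann_sum(n)):.4f}")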

Wrapped up in this one function is a large part of all my favorite stuff about math and about the humans who make it.  There are some spectacularly clean definitions that have been seized upon by some wonderfully playful minds to create an object that breaks every bond of common sense.  It’s the same process of rules-brokered explosive creativity you see in Beethoven’s Third Symphony, or the perspective tinkering of a Braque canvas, only rendered, at least for me, several orders of magnitude more exciting by virtue of being so ethereal, so elusively abstract.

It’s like I always say: If you love poetry, you’ll love math more.  Eventually.

 

FURTHER READING: If you liked that function, there are tons of other such to be had out there.  A great place to start is Bernard Gelbaum and John Olmsted’s Counterexamples in Analysis, which is a book of nothing but dastardly clever things that seem to defy common sense.  To get most of it, though, requires something of a background in Real Analysis, for which Charles Pugh’s Real Mathematical Analysis is a great starting point that just about anybody can dive into right away!