The bubblegum theory of language is a metaphor for how we use words. It works best with ostensive definitions, and it's a good metaphor insofar as people behave consistently with it most of the time. It starts to break down with abstract concepts, and it fails in that hardly anyone actually thinks they're doing it this way when they're using language. Here's the gist of it: when I learned that the thing I was sitting on is called a "chair," I stuck a piece of imaginary bubblegum to that "pile of atoms arranged chair-wise" (which for simplicity I shall just call a "chair" from here on; please remember that this is strictly a language of convenience), and I put the label of "chair" on the other end. I carry this label wherever I go, sticking that bubblegum to all the other chairs I encounter. I also have a piece of bubblegum for "my desk chair at home," which only attaches to one chair in the world. There's also a piece of bubblegum for "my desk chair at work," which also attaches to exactly one chair in the world. Both of those chairs, however, are attached by another piece of bubblegum to the label of "my chair." I also have labels and bubblegum for armchairs, high chairs, recliners, and so forth.
So what do I do with all these labels? Well, I place them all into a box labelled "chairs." And this box itself shares a space with my "tables" box, and my "couches" box, and my "armoires" box, and many other boxes in a much larger box labelled "furniture." So I've got all these boxes which contain other boxes which themselves contain labels which are attached by bubblegum to piles of atoms out in the world in a big sticky mess: that's language.
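For the programmers in the audience, the metaphor can be sketched loosely as nested mappings. This is purely an illustration of the structure described above; all the names and "piles of atoms" here are made up:

```python
# A loose sketch of the bubblegum metaphor: labels (concepts) are keys,
# bubblegum is the mapping from a label to the piles of atoms it sticks
# to, and boxes (categories) are containers that hold labels -- never
# the things themselves. All names here are illustrative.

# "Piles of atoms" out in the world, represented only by opaque ids.
world = {"atoms_17", "atoms_42", "atoms_99", "atoms_3"}

# Bubblegum: each label sticks to zero or more things in the world.
bubblegum = {
    "chair": {"atoms_17", "atoms_42", "atoms_99"},
    "my desk chair at home": {"atoms_17"},  # sticks to exactly one chair
    "my desk chair at work": {"atoms_42"},  # likewise
    "my chair": {"atoms_17", "atoms_42"},   # both of the above
}

# Boxes contain labels (or other boxes), never the things in the world.
furniture = {
    "chairs": ["chair", "my chair", "armchair", "high chair", "recliner"],
    "tables": ["table", "my desk"],
    "couches": ["couch", "loveseat"],
}

# Note what the "chairs" box actually holds: strings (labels), not atoms.
assert all(isinstance(label, str) for label in furniture["chairs"])
```

The point the sketch makes concrete is that `furniture["chairs"]` contains only strings; none of the `atoms_*` objects ever appears inside any box.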
Here's the important part: there are no "things in the world" contained in these boxes; the boxes only contain the labels[1]. Furthermore, the boxes, labels, and bubblegum are all purely imaginary. This is, in at least one very important way, how language works; this is going to require some serious unpacking, though.
The boxes are categories, the labels are concepts, and the huge web of bubblegum connects my concepts to their referents out in the world (albeit in a purely conventional way). But concepts are not themselves the things out in the world, they simply represent my understanding of those things - they are merely ideas, in other words, and ideas are only in our heads. They're very useful, to be sure, and they may correspond to reality and cohere with each other to varying degrees, but they're still ideas in our heads. The important part here is that there is a very real difference between "a thing" and "an idea of a thing," and though we may conflate the two in everyday language (which is usually fine), we should be aware on some level that there is still a crowbar separation between the two (to borrow Eddie Izzard's turn of phrase).
So far, so good. Language is fueled by concepts and concepts are different from the things in the world that they are supposed to represent. None of this should really be controversial so far (at least, I hope not), but there are some surprising implications to be drawn from this.
Our knowledge about the world is propositional[2]: to know something, there must be a proposition that can express that knowledge. Propositions can have truth value. Things out in the world do not have truth value. Propositions must be expressed in terms of language, and moreover, require syntax and grammar and all sorts of other made-up stuff. Things out in the world simply are what they are and don't need anything from us to go on being as they are (in other words, reality just is what it is, regardless of what we would like to say, think, or do about it). Language is fueled by concepts, though, and concepts are fundamentally divorced from the things in the world which they represent[3]. This is a problem: propositions, the only way we can express our ideas about things in the world, can have truth value; but they are ultimately unable to correspond completely to the things in the world about which we wish to talk.
This is a constraint of the system, and there's no getting around it. It's kind of like the way that reason can never be externally justified, because justification requires a reasoning process, and now you're justifying reason itself with a reasoning process, which is circular and doesn't really accomplish anything. Language is the only way we can talk about the world, it's the only way we can check for truth value (whether or not a proposition corresponds to reality), but language will always be fundamentally divorced from reality at some level, so we'll never get complete correspondence.
"This is all fine and dandy," you say, "But at the end of the day, our television sets turn on, our computers access the internet, and NASA still uses Newtonian physics to send very expensive robots to other planets even though general relativity and QM correspond better to reality." Yeah, for NASA's purposes, Newtonian physics corresponds enough, even though it doesn't correspond completely. The laws of physics don't have to fit perfectly into our "Laws of Physics" box for us to get some use out of that box. This is true of the principles behind television sets, computers, and the internet as well.
What I'm saying is that this is conventional truth, which is good enough for everyday applications. But conventional truth is fraught with ambiguity, and needs constant revision and maintenance to keep up with all the new things we discover every day and have to integrate into our conceptual frameworks. If we wanted something more than conventional truth, we would need to close the gap between language and the world to get metaphysical truth, which would at last dispense with the ambiguity and allow us to use linguistics to discover facts about things in the world since we would have complete correspondence (the boxes and labels would fit perfectly). This would be like using an understanding of a computer language to discover how things are going to play out when the program is run, simply by looking at the program - you can have a complete picture, so you can discover things "about the world (model)" just by analyzing the language on which it runs, because the "things" in the "world (model)" are modeled after the preconceived categories (and not vice versa). The real world doesn't run on language, though, because the real world is bottom-up.
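The program analogy can be made concrete with a trivial (and entirely made-up) toy example. In a "world" that runs on a program, the categories come first and the "things" are instantiated from them, so facts about the world can be read straight off the source:

```python
# A toy "top-down world": the categories (classes) are defined first,
# and every "thing" in the model is instantiated from them. Facts about
# the model can therefore be discovered by analyzing the source alone.
# All names here are illustrative.

class Chair:
    legs = 4          # true of every chair in this world, by definition

class Stool(Chair):
    has_back = False  # a sub-category inherits the category's facts

# We can "discover" a fact about every chair that will ever exist in
# this model without observing a single instance, because the world is
# built from the categories and not vice versa:
print(Chair.legs)   # 4 -- known before any chair is ever made
print(Stool.legs)   # 4 -- inherited, still known in advance
```

The real world offers no such shortcut: there, the "instances" came first, and our categories are after-the-fact labels that never fit perfectly.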
In a top-down Universe, metaphysical truth is at least in principle accessible. In a bottom-up Universe, it's fundamentally inaccessible. What I'm saying is that as long as we continue to have this gap, this will be strong evidence that we live in a bottom-up Universe. If we did ever discover the "machine code" of the Universe, and could use it to discover things without empirically verifying them[4] (as we can do with computer programs), then this would be almost clinching proof that we're in a top-down Universe. What this means for religions is that their top-down languages are failures: animals were not created according to any discernible "kinds," "detestable abominations" such as shrimp and lobster are fucking delicious, there's absolutely nothing that is demonstrably "sacred" about cows, and humanity has no principled and privileged status above the rest of the organisms on the planet. In short, this is not a knock against conventional truth, but rather a rough illustration of its constraints, and an attempt to show (conventionally, of course) what it says about the world that we have these constraints in the first place; it's a knock against the great body of codified superstitious nonsense that comprises most religions.
I hope this helps clarify my meaning.
Notes:
[1]: There's clearly no way that an imaginary box in your head could literally contain a thing in the world, but that's not quite what I mean. There is a figurative way that an imaginary box in your head could contain such a thing, though: if you had complete understanding of a thing in the world, if your idea of that thing was "perfect," so to speak, then you would be able to discover true propositions about that thing in the world merely by analyzing your idea of it. That's what it means for a label to "really stick," or for a thing in the world to "really fit" into a box. Though it is logically possible for things to be set up this way, this idea is clearly absurd when applied to the real world, and demonstrates that whenever we write, talk, or even think "about" things in the world, we're really and truly writing/talking/thinking about our ideas of those things, and not the things themselves. The Ding an sich is fundamentally inaccessible to us.
[2]: OK, yeah, there's propositional and procedural knowledge ("to know that X is the case" vs. "to know how X is done"). This is the difference between knowing what the rules of tennis are, and knowing how to deliver a good serve, as Ebonmuse points out in A Ghost in the Machine. However, procedural knowledge is more about reflexes and muscle memory and isn't really able to correspond to the world in the same way that propositions can. Hence its exclusion; I just wanted to take a moment to explain why.
[3]: To be fair, this is only contingently true of fallible beings like us who lack second-order knowledge (the knowledge qua "justified true belief" that you know something, or justified absolute certitude). But until or unless someone demonstrates omniscience or gnosis or some such happy horse-shit, we don't really need to concern ourselves with this qualifier.
[4]: Because we're fallible beings, we'd have to empirically verify these discoveries anyway. However, we would still be discovering facts with this machine code if it were genuine; what we would discover with empirical verification is merely that the machine code is genuine, thereby providing a justification for discoveries obtained with it. In other words, they're true already, we're just checking out the source.