huge numbers explained concisely

Here's a peek at what we'll be building up to (well, assuming $x$ is large enough): #\operatorname{Rayo}(x)>\operatorname{TREE}(x)>⋯>φ_x(x)>f_{ε_{ε_x}}(x)>f_{ε_x}(x)>f_{ε_1}(x)>f_{ε_0}(x)>f_{ω^{ω^ω}}(x)>f_{ω^ω}(x)>\newline f_{ω^2}(x)>x→x→x-1→x-1>f_{ω2}(x)>f_{ω+1}(x)>g_x>x↑^xx∼f_x(x)>x↑↑↑x>x↑↑x>x^{x^x}>x^x>x+1# Let's begin.
Going Up (or exploding bunnies, as I like to call it) We start with the humble exponentiation operator: #x^y=x↑y# Wait a second. What does the $↑$ symbol represent? Well, a single arrow means exponentiation, or $x^y$. Here are two arrows: #x↑↑y=\underset{y\ x\text{'s}}{\underbrace{x^{x^{x^{x^{x^⋰}}}}}}# Well, that's a lot bigger, isn't it? This is repeated exponentiation. Just like exponents, operations with any number of arrows go from right to left. You might want a few examples, right? #2↑↑3=2^{2^2}=2^4=16\newline 2↑↑4=2^{2^{2^2}}=2^{16}=65536\newline 2↑↑5=2^{2^{2^{2^2}}}=2^{65536}\newline#
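If you'd like to poke at these towers yourself, here's a tiny Python sketch (purely my own toy code; the name `tetration` is just what I'm calling it, nothing standard):

```python
def tetration(x, y):
    """x ↑↑ y: a power tower of y copies of x, evaluated right to left."""
    result = 1  # an empty tower is 1, so x ↑↑ 0 = 1 by convention
    for _ in range(y):
        result = x ** result
    return result

print(tetration(2, 3))  # 2^(2^2) = 16
print(tetration(2, 4))  # 2^16 = 65536
```

`tetration(2, 5)` still finishes (it's "only" a 19,729-digit number), but `tetration(3, 4)` would have trillions of digits — which is exactly the point.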

Oh, and this function is called "tetration," where we stack $y$ copies of $x$ into a tower. It's sometimes called a power tower as well because the numbers get HUGE quickly. For example, $4^{4^{4^4}}$ has $8.0723⋯×10^{153}$ digits, which is way too big to even comprehend. And yes, we get to the point that we can talk about a popular number: googol. A googol contains $100$ zeroes ($10^{100}$), which clearly is huge. You can have a googolplex ($10^{10^{100}}$), a googolduplex ($10^{10^{10^{100}}}$), a googoltriplex ($10^{10^{10^{10^{100}}}}$), etc. Problem is, there are fewer atoms in the universe than the innocent googol. Yet this is downright minuscule compared to what's coming next. #x↑^3y=x↑↑↑y=\underset{y\ x\text{'s}}{\underbrace{x↑↑ \bigg(x↑↑\Big(x↑↑\big(x↑↑(x↑↑x)\big)\Big)\bigg)}}# That's three arrows, so, for example, $4↑↑↑3=4↑↑(4↑↑4)=4↑↑4^{4^{4^4}}=\underset{4^{4^{4^4}}\ 4\text{'s}}{\underbrace{4^{4^{4^{4^{4^⋰}}}}}}$, a tower whose height is itself a number with about $8.07×10^{153}$ digits. You can't express it anymore with numbers or digits. You just can't. At this point, the universe can't last a fraction of a fraction of…anyway.
Pretty insane for such an innocent-looking formula, but let's try adding another arrow… #x↑^4y=x↑↑↑↑y=\underset{y\ x\text{'s}}{\underbrace{x↑↑↑x↑↑↑x↑↑↑x↑↑↑x↑↑↑⋯↑↑↑x}}# By the way, this is called Knuth's up-arrow notation, well, because there are arrows. From here on out, there's literally no way to describe the magnitude of these values. Digits don't work, nothing does! However, this is just the start. It's unimaginably huge. This is more than the number of atoms in the universe, more than its total energy, more than the number of quantum states, chess games…I'm getting carried away. #a↑^nb=a\ \underset{n\text{ arrows}}{\underbrace{↑↑↑↑↑⋯↑↑↑↑↑}}\ b=\underset{b\ a\text{'s}}{\underbrace{a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a↑^{n-1}a⋯}}# More recursion! The tiny $n-1$ means to do the operation with $n-1$ up arrows.
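The whole up-arrow family fits in a few lines of recursion. A hedged sketch (my own `arrow` helper, not any standard library function — and only tiny inputs ever finish):

```python
def arrow(a, n, b):
    """a ↑^n b with Knuth's up arrows: one arrow is plain exponentiation,
    and n arrows unfold into a right-to-left chain of (n-1)-arrow steps."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # the usual base case: a ↑^n 0 = 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 2, 4))  # 2↑↑4 = 65536
print(arrow(3, 2, 3))  # 3↑↑3 = 7625597484987
```

Try `arrow(3, 3, 2)` and your computer will politely give up; that's $3↑↑↑2=3↑↑3$, still fine, but `arrow(3, 3, 3)` is already beyond the universe.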

I like to think of these recursive functions as cute little explosive bunnies that leave smaller teennnnnnnny tiny explosive bunnies in their wake. I know, strange analogy, right?
GRAHAM'S NUMBER: The Elephant in the Article Fanatics of big numbers might also recognize this: Graham's Number! Remember the cute little up arrow? Yeah, this has everything to do with that. You can guess it's a pretty huge number! So: #g_1=3↑↑↑↑3\newline g_2=3↑^{g_1}3\newline g_3=3↑^{g_2}3\newline g_n=3↑^{g_{n-1}}3# So, $g_{64}=3↑^{g_{63}}3$. But think about what that means. For the next number in the sequence, we take the previous number as the number of arrows. This is incomprehensible, as our minds can't process what it means: we just see the threes and trust that it is huge. Furthermore, Graham's number is between $3→3→64→2$ and $3→3→65→2$ (using the right-arrow notation coming up next). That should give you an idea of how crazy-huge-huge-huge Graham's Number and right arrows are. The first number in the sequence already has an unbelievable amount of digits: #g_1=3↑↑↑(3↑↑↑3)=3↑↑↑(3↑↑(3↑↑3))=3↑↑↑(3↑↑7,625,597,484,987)=3↑↑↑\underset{7,625,597,484,987\ 3\text{'s}}{\underbrace{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{3^{\text{and so on}}}}}}}}}}}}}}}}}}}}}}}}}}}}=\underset{3↑↑7,625,597,484,987\ 3\text{'s}}{\underbrace{3↑↑3↑↑3↑↑3↑↑⋯↑↑3}}# Note that you can also calculate $g_{65}$, $g_{66}$ and all other numbers in the sequence.
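We obviously can't evaluate $g_1$, but the shape of the recursion fits in a few lines. As a toy, I'm swapping the base 3 for 2 (`toy_graham` is entirely my own invention, not Graham's construction): with 2s everything stays computable, because $2↑^n2=4$ for every $n$ — a tower of two 2s collapses no matter how many arrows you add.

```python
def arrow(a, n, b):
    """Knuth's up arrows, as before."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

def toy_graham(steps, base=2):
    """The g-sequence recursion with a toy base. With base=3 even g_1 is
    hopeless, but base=2 never escapes 4, since 2 ↑^n 2 = 4 for all n."""
    g = arrow(base, 4, base)      # g_1 = base ↑↑↑↑ base
    for _ in range(steps - 1):
        g = arrow(base, g, base)  # g_{k+1} = base ↑^{g_k} base
    return g

print(toy_graham(64))  # stuck at 4 forever
```

Change `base` to 3 and the very first line already tries to compute $3↑↑↑↑3$ — good luck.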
Horizontal Expansion? More Confusion! (Be aware: this section really isn't important. Seriously. If you're in a rush, feel free to stop procrastinating and just skip to the next question.) #a→b=a^b\newline a→b→c=a↑^cb# Wait, what does that little right arrow even mean? This is Conway's chained arrow notation, and although it looks pretty innocent right now, don't get fooled just yet. This notation is defined by itself, and…things get absolutely HUGE. Here's how it gets evaluated (writing $a→⋯→x$ for the unchanged front of the chain): #a→⋯→x→y→1=a→⋯→x→y\newline a→⋯→x→1→z=a→⋯→x\newline a→⋯→x→y→z=a→⋯→x→(a→⋯→x→y-1→z)→z-1# R U okay?
That does seem like a mathful, but, eventually, if you use these properties, you can simplify any chain of right arrows. You can try this with, say, $3→3→2$, and verify that $a→b→c=a↑^cb$. These chains get EXTREMELY big, so we'll use them to compare the sizes of the values ahead.
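The three rules translate almost directly into code. A sketch (the `chain` function is my own toy, and only tiny chains finish before the universe ends):

```python
def chain(*c):
    """Evaluate a Conway chain a→b→c→… by the reduction rules above."""
    c = list(c)
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]          # a→b = a^b
    if c[-1] == 1:
        return chain(*c[:-1])        # a trailing →1 can be dropped
    if c[-2] == 1:
        return chain(*c[:-2])        # …→1→z collapses to the prefix
    y, z = c[-2], c[-1]
    inner = chain(*c[:-2], y - 1, z)
    return chain(*c[:-2], inner, z - 1)

print(chain(3, 3, 2))  # 3↑↑3 = 7625597484987
```

`chain(3, 3, 2)` works because it bottoms out at $3→27→1=3^{27}$; `chain(3, 3, 3)` does not work, for the usual reasons.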
Let's squeeze in a discussion about factorials here, which are practically invisible compared to what comes next. $n!=1×2×3×4×⋯×n$ isn't actually that big, and is less than $n^n$ for $n>1$. ($5×4×3×2×1$, which is $5!$, is clearly less than $5×5×5×5×5$, or $5^5$.) So, yeah, they aren't very big.
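You can watch the gap between $n!$ and $n^n$ in a couple of lines:

```python
import math

# n! picks its factors from 1..n while n^n uses n every single time,
# so n! < n^n whenever n > 1:
for n in range(2, 8):
    print(n, math.factorial(n), n ** n)
```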
Back to the start (and then further) To begin, we start by…adding one. #f_0(x)=x+1# Well, that's really boring. We can do this function $x$ times (yeah, doubling): #f_1(x)=f_0^x(x)=x+(1+1+1+⋯+1)=x+x=2x# Still quite innocent looking. Note that the superscript on the function doesn't exponentiate the result: $f_0^3(x)=f_0(f_0(f_0(x)))$, not $(f_0(x))^3$. This recursion will be exploited to the fullest later. #f_2(x)=f_1^x(x)=x×2^x# Here we see exponentiation! Of course, we can extend this to ANY function in our sequence: #f_n(x)=f_{n-1}^x(x)=\underset{f_{n-1}\ x\text{ times}}{\underbrace{f_{n-1}(f_{n-1}(f_{n-1}(⋯f_{n-1}(x))))}}#
These functions, together, are called a "fast-growing hierarchy," and they grow really quickly, as you might be able to guess from the name.
Need a few examples to help explain this again? Let's try $f_4(2)$: #f_{n+1}(x)=f_n^x(x)=\underset{f_n\ x\text{ times}}{\underbrace{f_n(f_n(f_n(⋯f_n(x))))}}\newline f_4(2)=f_3(f_3(2))\newline f_2(x)=x×2^x\newline f_3(2)=f_2(f_2(2))=f_2(2×2^2)=f_2(8)=8×2^8=2048\newline f_4(2)=f_3(2048)=\underset{f_2\ 2048\text{ times}}{\underbrace{f_2(f_2(f_2(⋯f_2(2048))))}}=\text{a whole lot}# There's a reason it's called fast, after all. Interestingly, $f_n(x)>2↑^{n-1}x$, which isn't actually that great. But $f_ω$ improves the speed significantly. This function is also defined in terms of itself, so let's see it! #f_ω(x)=f_x(x)∼3→3→x\newline f_{ω+1}(x)=f_ω^x(x)# This kind of function goes "diagonally," as multiple values increase at the same time. It may not seem like much, but this is absolutely ginormous! For this function, the subscript (such as the $ω$ in $f_ω(x)$) is what makes the value so large, and, by going diagonally and iterating, the number grows stunningly quickly. Don't underestimate the power of this function. In case you want to know what the symbol is, it's a lowercase omega symbol, which is commonly discussed whenever infinity plops into mathematics. (Now you know one more useless fact!) If that makes you want to scream badly, just think of it as a unique variable, so, for example, $ω+3=ω+1+1+1$. And, because $f_n(x)>2↑^{n-1}x$, we can easily tell that $f_ω(x)>2↑^{x-1}x$. For example: #f_{ω+1}(5)=f_ω^5(5)=f_ω(f_ω(f_ω(f_ω(f_ω(5)))))=f_ω(f_ω(f_ω(f_ω(f_5(5)))))=f_ω(f_ω(f_ω(f_ω(f_4^5(5)))))=\text{something huge}# That's big enough already that $f_{ω+1}(64)>g_{64}$ ($g_{64}$ being Graham's Number). Even crazier, $f_{ω+1}(x)>f_{x↑^xx}(x)$, and $f_{ω+1}(x)>2→x→x→2$ even for small values. At this rate, we might be able to understand it from a purely mathematical perspective, but we can't truly grasp the size of these functions and numbers.
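For finite subscripts the whole hierarchy is a few lines of Python (my own sketch; don't feed it anything much bigger than these examples):

```python
def f(n, x):
    """The fast-growing hierarchy at finite levels:
    f_0(x) = x + 1 and f_n(x) = f_{n-1}^x(x)."""
    if n == 0:
        return x + 1
    for _ in range(x):  # range(x) is fixed before x gets reassigned
        x = f(n - 1, x)
    return x

print(f(1, 7))  # 2·7 = 14
print(f(2, 3))  # 3·2^3 = 24
print(f(3, 2))  # f_2(f_2(2)) = f_2(8) = 2048
```

`f(4, 2)` already needs to iterate $f_2$ a couple of thousand times on rapidly exploding inputs, so it won't finish.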
Now let's look at $f_{ω+2}$: #f_{ω+2}(x)=\underset{f_{ω+1}\ x\text{ times}}{\underbrace{f_{ω+1}(f_{ω+1}(f_{ω+1}(f_{ω+1}(x))))}}# We can add bigger values and also double $ω$. #f_{ω+n}(x)=f_{ω+(n-1)}^x(x)∼3→3→(x+1)→(n+1)\newline f_{ω2}(x)=f_{ω+x}(x)# Why is the number two after the $ω$? Because the inventors of these numbers decided so. (Hey, you'll get used to it after a while!) This, again, explodes the function. For example, $f_{ω2}(4)=f_{ω+4}(4)$. At this rate, $f_{ω+n}(x)>x→x→(x-1)→(n-1)$, which is truly insane. Why don't we use faster-growing values, like, say, $→$ chains? Welllllllllllll, this function is sweet and simple, and also starts with the simplest operation you can think of: adding one. Its easy-to-understand recursion and simplicity are what make it so popular. Oh, it's also going to get way bigger than Conway chains, so...
Oh, and if you've been staring at a screen for a while, why don't you take a little break? The next part has a whole lot of math (and large numbers)!

You're back? Hooray! We can now multiply by other values: #f_{ωn}(x)=f_{ω(n-1)+x}(x)=f_{ω(n-1)+(x-1)}^x(x)\newline f_{ω^2}(x)=f_{ωx}(x)=f_{ω(x-1)+x}(x)\newline f_{ω^2}(3)=f_{ω3}(3)=f_{ω2+3}(3)=f_{ω2+2}^3(3)\newline f_{ω^2+1}(3)=f_{ω^2}^3(3)=f_{ω^2}(f_{ω^2}(f_{ω3}(3)))=\text{really big}# A reasonable next step would be to exponentiate even more, going from $ω^2$ to $ω^x$, and then to $ω^ω$. #f_{ω^{n+1}}(x)=f_{ω^n×x}(x)\newline f_{ω^ω}(x)=f_{ω^x}(x)∼\underset{x+2}{\underbrace{3→3→⋯→3→3}}# At this rate, this function grows faster than Conway chains of any fixed length! (Imagine not reading that section.) You might be wondering how we know what part to simplify. We won't go into detail, but, usually, simplifying values near the end of the expression breaks things down. Eventually, you'll get an unimaginable formula, which could technically be expanded, but the universe would collapse before you even got 1% there. Oh, also, the function ($f$) isn't what makes the value huge. As long as it's somewhat fast-growing, the weird $ω$ symbol does the rest. If you're wondering how values like $f_{ω^{ω^2}}$ are calculated, well, it expands like this: #f_{ω^{ω^2}}(x)=f_{ω^{ωx}}(x)=f_{ω^{ω×x}}(x)=f_{ω^{ω(x-1)+ω}}(x)=f_{ω^{ω(x-1)+x-1}×ω}(x)=f_{ω^{ω(x-1)+x-1}×x}(x)=f_{ω^{ω(x-1)+x-1}×(x-1)+ω^{ω(x-1)+x-1}}(x)\text{ and so on and on}# at which point expansion continues until the expression finally has a $+1$ dangling at the end. (This also means that large values soon become painfully hard to expand!)
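If you want to see this $ω$-arithmetic actually run, here's a hedged sketch for ordinals below $ω^ω$, written as coefficient tuples in Cantor normal form (my own encoding), using the same fundamental sequences as the text: a successor iterates, and a limit swaps $ω^i$ for $ω^{i-1}·x$.

```python
def fgh(coeffs, x):
    """f_α(x) for α < ω^ω, with α written as a coefficient tuple:
    (c0, c1, c2, …) means c0 + c1·ω + c2·ω² + …"""
    coeffs = list(coeffs)
    if not any(coeffs):
        return x + 1                # f_0(x) = x + 1
    if coeffs[0] > 0:               # successor: f_{α+1}(x) = f_α^x(x)
        coeffs[0] -= 1
        for _ in range(x):
            x = fgh(coeffs, x)
        return x
    i = next(k for k, c in enumerate(coeffs) if c > 0)
    coeffs[i] -= 1                  # limit: replace one ω^i by ω^(i-1)·x
    coeffs[i - 1] = x
    return fgh(coeffs, x)

print(fgh((0, 1), 2))     # f_ω(2) = f_2(2) = 8
print(fgh((0, 0, 1), 1))  # f_{ω²}(1) collapses all the way down to 2
```

Notice the second call: at $x=1$ even enormous subscripts deflate to 2, which is a point the expansion section below makes as well.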
Of course, we can exponentiate more with another funny symbol: $ε$. (Don't ask how they decided which symbols meant what, because I DON'T KNOW!) #f_{ε_0}(x)=f_{\underset{x\text{ times}}{\underbrace{ω^{ω^{ω^⋰}}}}}(x)\newline f_{ε_0}(3)=f_{ω^{ω^ω}}(3)=f_{ω^{ω^3}}(3)# This number is HUGE. (You'll hear that from me a lot.) In fact, it's bigger than Conway chains (right arrows), which we realized were stunningly gigantic! As we're repeatedly exponentiating, we can use some arrows again, so $f_{ε_0}(x)=f_{ω↑↑ω}(x)$. Remember how we calculate $ω$ weirdly? Well, we can use the $[\ ]$ symbols, so, for example, $ω^2[n]=ωn$, just as we did in the fast-growing hierarchy. We can also extend this to $ε_0$, another wonky infinity: #ε_0[n]=ω↑↑n# We can also define our previous infinities better: #ω^{x+1}[n]=ω^x×n\newline ω^x[n]=ω^{x[n]}# AAAAAAAAAAAAAAAAAAAAHHHHHHHH!!!! (Hopefully that cleared up the confusion about what we were doing earlier.) The variable $n$ can be inputted through the fast-growing hierarchy (or any other function), so $f_{ε_2}(5)=f_{ω^{ω^{ω^{ω^{ε_1+1}}}}}(5)$. $ε_1$ then gets evaluated later, and so on.

Instead of one infinity, we have an infinite amount of infinities now, which is…strange. $ε_0+1>ε_0$, but $ω^{ε_0}=ε_0$, as $ε_0$ is defined with an infinite amount of $ω$'s: #ε_0=ω^{ω^{ω^{ω^{ω^{ω^{ω^{ω^{ω^⋰}}}}}}}}# Adding another $ω$ doesn't change the value. However, just like $ω^ω≠ω$, $ε_0^{ε_0}$ is also not equal to $ε_0$. We're now officially in the completely warped realm of infinity, but let's go deeper.
(Specifically, these numbers are ordinals, or numbers that tell the position of something. $ℵ_0$, called "aleph null," is instead a cardinal, and is named after the first letter of the Hebrew alphabet. Cardinals count the amount of something, so adding or subtracting doesn't change the value. Even though we're using these special symbols, the resulting value is still finite because we can eventually replace $ω$.)

Wow, that was a huge wall of text! Anyway, we can also define $ε_1$, $ε_2$, and $ε_k$: #ε_1[n]=\underset{n\ ω\text{'s}}{\underbrace{ω^{ω^{ω^{ω^{ω^{ε_0+1}}}}}}}\newline ε_2[n]=\underset{n\ ω\text{'s}}{\underbrace{ω^{ω^{ω^{ω^{ω^{ε_1+1}}}}}}}\newline ⋯\newline ε_{k+1}[n]=\underset{n\ ω\text{'s}}{\underbrace{ω^{ω^{ω^{ω^{ω^{ε_{k}+1}}}}}}}# Confusing? The properties of the "smaller" values also apply, so $ε_n$ still equals $ω^{ε_n}$. (Also, you have to input a nonnegative integer for each of the values, so no $ε_{-1}$ or $ε_{1.23456789}$.) All right. You can take a break from these functions and go look at a few trees if you want.
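Before you go tree-watching, here's a little string renderer for those fundamental sequences — no real math happening, it just prints the shape (the function name and rendering are my own):

```python
def eps_fs(k, n):
    """Render ε_k[n] as a string: n ω-exponentials stacked on top of
    ε_{k-1}+1 (or on top of 1 for ε_0, since ε_0[n] = ω↑↑n)."""
    s = "1" if k == 0 else f"ε_{k - 1}+1"
    for _ in range(n):
        s = f"ω^({s})"
    return s

print(eps_fs(0, 3))  # ω^(ω^(ω^(1)))
print(eps_fs(2, 2))  # ω^(ω^(ε_1+1))
```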

TREE(3): I wasn't joking! Well, there's another sizable contestant: $\operatorname{TREE}(3)$.
This one is a little more complicated. Okay, a lot more complicated. For $\operatorname{TREE}(x)$, you start out with $x$ types of seeds. Then, you play a game. At each step, your tree may have no more seeds (or nodes) than the number of that step. Also, no tree from any previous step may be contained in a later one. Wait, what?
You still all right?
Yeah, I agree: that's really confusing. Essentially, picture a bunch of dots. Each dot may be connected to some other dots. Also, a tree from a previous step cannot be a part of a new tree (we'll discuss this in a second). For example, a game with a single seed type lasts one round, because the second step would contain two copies of the first seed, which isn't allowed. Notice how for $N=2$, the second step starts with two dots. Then, in the third step, it's perfectly fine to have a single dot for that color, as the rule doesn't apply backwards. We're looking for the longest game that can be played, and here are the results: #\operatorname{TREE}(1)=1\newline \operatorname{TREE}(2)=3\newline \operatorname{TREE}(3)≫g_{64}# (The $≫$ symbol, by the way, means "much greater than." Although, with such huge numbers, this symbol packs a much bigger punch than just a few orders of magnitude.) For more than three seed types, the result is a lot larger, but we don't even have an exact value for $\operatorname{TREE}(3)$: all we have are lower bounds, and even those are gigantic. IMPORTANT NOTE: This number is MUCH LARGER than the numbers we've looked at before, and even many numbers after!

All right. The no-previous-tree rule is formally about embeddability: no earlier tree may be (homeomorphically) embedded in a later one.
You would expect the game to go on forever, but for every value of the function, it is finite! This value has been PROVEN to be way more than any of the numbers we have discussed so far. Any current approximation is actually an extremely loose lower bound (meaning the actual value is much, much higher). This number is too big to be represented with up arrows or chained arrows! It simply blows away the numbers we've discussed so far. But we can go further. Well, wait, there's one thing. Some rather recent proofs have shown that this value actually exceeds $f_{φ(1@ω)}(\operatorname{tree}(\operatorname{tree}(3)))$, and even this is much smaller than the actual value. You see, someone who was really into this function devised a complex set of estimations, and arrived at the lower bound $f_{φ(1@ω)}(\operatorname{tree}(\operatorname{tree}(3)))$. You don't need to worry about what that is, but there was a proof that $\operatorname{TREE}(3)≫\operatorname{tree}^{\operatorname{tree}(7)^{\operatorname{tree}(7)^{\operatorname{tree}(7)^{\operatorname{tree}^8(7)}}}}(7)$, which is part of the reason this proof worked in the first place! $φ(1@ω)$ will be explained later, but $\operatorname{tree}(\operatorname{tree}(3))$ (with the lowercase, weaker $\operatorname{tree}$ function) is still rather big.
fast, Fast, and FAST! (and more symbols) Next up: the Veblen function, represented with the lowercase phi symbol. #φ_0(0)=1\newline φ_0(k)=ω^k\newline φ_1(k)=ε_k\newline φ_{x+1}(0)[n]=φ_x^n(0)\newline φ_{x+1}(y+1)[n]=φ_x^n(φ_{x+1}(y)+1)\newline# These functions grow EXTREMELY quickly. For example, well: #φ_2(0)[n]=φ_1^n(0)=ζ_0[n]=\underset{n\ ε\text{'s}}{\underbrace{ε_{ε_{ε_{ε_{ε_{ε_{ε_{⋯ε_0[n]}}}}}}}}}# Here's a weird bonus: $φ_3(0)=η_0=ζ_{ζ_{ζ_{ζ_{ζ_{⋯ζ_0[n]}}}}}$ and is defined in the same way that $ζ_0$ is. So $η_0[3]=ζ_{ζ_{ζ_0[3]}}=ζ_{ζ_{ε_{ε_{ε_0[3]}}}}=ζ_{ζ_{ε_{ε_{\left(ω^{ω^{ω[3]}}\right)}}}}=ζ_{ζ_{ε_{ε_{(ω^{ω^3})}}}}$ (and so on), for example. Notice how $ε$, $ζ$, and $η$ are each much more powerful than their predecessor? The $φ$ function just takes advantage of that by repeating it.
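The nesting rule $φ_{x+1}(0)[n]=φ_x^n(0)$ is easy to visualize with a throwaway string sketch (no actual ordinal arithmetic here, just the shape; `phi_expand` is my own name):

```python
def phi_expand(level, n):
    """The shape of φ_{level}(0)[n]: n nested copies of the level below,
    per the rule φ_{x+1}(0)[n] = φ_x^n(0)."""
    s = "0"
    for _ in range(n):
        s = f"φ_{level - 1}({s})"
    return s

print(phi_expand(2, 3))  # ζ_0[3] written in φ's: φ_1(φ_1(φ_1(0)))
```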
Now, let's find the secret recipe that made our $f$'s so powerful. We first used recursion, then exploited iteration, and the special symbols allowed for diagonalization. We'll continue in the next section!
And with that cliffhanger, we proceed to observe some weird beavers.
The UNCOMPUTABLE Busy Beaver Apparently, people can't decide what its notation should look like, so you may see $\operatorname{BB}(n)$, $\operatorname{B}(n)$, $Σ(n)$, or even $\operatorname{BB}\text{-n}$. This number involves a Turing machine: an idealized machine that reads and writes ones and zeroes.
Think of the busy beaver as a weird game: you're in an infinitely long hallway, with infinitely many rooms on the first floor of the hotel. You have a beaver that consults his TuringPhone™ⒼⓄⓄⒹ for instructions. For example, the instructions could be these:
Start with guide 1.
Guide 1: If the room is dark, then turn on the light and go to the room to the left.
If the room is lit, then keep the light on and follow Guide 2 instead.
Guide 2: If the room is lit, turn off the light and move to the room to the right.
Otherwise, if the room is dark, then stop.

The "stop" part of an instruction allows the beaver to finish eventually. If the instructions don't tell you to switch guides, then keep using the same guide after you move. After the beaver stops, it counts the number of lit rooms and reports that amount. To calculate $\operatorname{BB}(n)$, we take the greatest (finite) number of lit rooms over every possible set of $n$ guides. (If part of an instruction is stopping, then you can't also turn the light on.) It turns out, for $\operatorname{BB}(3)$, these rules win:
Start with Guide 1.
Guide 1: If the room is dark, turn on the light and go to the room to the right. Then use Guide 2 instead.
If the room is lit, then move to the left and consult Guide 3 instead.
Guide 2: If the room is dark, turn it on and move to the left and use Guide 1 instead.
If the room is lit, turn off the light and move to the room to the right.
Guide 3: If the room is dark, turn it on and move to the left. Then, switch to Guide 2.
If the room is lit, then stop.

In fact, there aren't actually that many possible Turing machines: for $n$ states, you get $(4n+4)^{2n}$ valid ones, which grows far more slowly than even simple tetration ($↑↑$). However, this function is actually uncomputable, so you can't guarantee that the value you believe is the longest game is actually the longest. This all has to do with the halting problem: can you determine whether a Turing machine halts using another Turing machine? It turns out that this is impossible, so it's impossible to determine the result for larger values of the Busy Beaver. Sorry. (However, we can still revel in the fact that we do have ginormous lower bounds.) $\operatorname{BB}(1)=1$, and $\operatorname{BB}(2)=4$. It can be proved (with difficulty) that $\operatorname{BB}(3)=6$ and $\operatorname{BB}(4)=13$. Also, $\operatorname{BB}(5)$ equals $47,176,870$ with the newest proofs, $\operatorname{BB}(6)>10↑↑15$, $\operatorname{BB}(9)>10↑↑28$, and finally, $\operatorname{BB}(11)>3↑↑↑720618962331271$. $\operatorname{BB}(18)$ is proven to be bigger than Graham's Number, and $\operatorname{BB}(85)>f_{ε_0}(1907)$. (Apparently, there's a group of people who are obsessed with these numbers, and larger numbers in general, and they've found out these values!) THE MORE YOU KNOW
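You can actually brute-force the very bottom of the hierarchy. This toy sketch (my own encoding, not any standard one) enumerates every 2-state, 2-symbol machine and recovers $\operatorname{BB}(2)=4$. The 100-step cap is only safe because every halting 2-state machine is known to stop within 6 steps; justifying a cap like that for large $n$ is exactly what uncomputability forbids.

```python
from itertools import product

def run(prog, limit=100):
    """prog maps (state, symbol) -> (write, move, next_state); state -1 halts.
    Returns the number of lit cells if the machine halts within `limit` steps."""
    tape, pos, state = {}, 0, 0
    for _ in range(limit):
        if state == -1:
            return sum(tape.values())
        write, move, state = prog[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
    return None  # didn't halt in time

# Every 2-state machine: write ∈ {0,1}, move ∈ {left,right}, next ∈ {A,B,halt}.
options = list(product((0, 1), (-1, 1), (0, 1, -1)))
best = 0
for rules in product(options, repeat=4):
    prog = {(s, b): rules[2 * s + b] for s in (0, 1) for b in (0, 1)}
    ones = run(prog)
    if ones is not None:
        best = max(best, ones)
print(best)  # BB(2) = 4
```

That's roughly twenty thousand machines — child's play. The same idea for 5 states took a massive collaborative proof effort.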
The Beginnings of Insanity and SSCG What does SSCG mean? SSCG stands for "Simple SubCubic Graph," which, sadly, doesn't help much. As with $\operatorname{TREE}(3)$, it involves graphs. Here's the first few values: #\operatorname{SSCG}(0)=2\newline \operatorname{SSCG}(1)=5\newline \operatorname{SSCG}(2)≈3.2417⋯×10^{35775080127201286522908640066}\newline \operatorname{SSCG}(3)≫\operatorname{TREE}^{\operatorname{TREE}(3)}(3)\newline \operatorname{TREE}^{\operatorname{TREE}(3)}(3)=\underset{\operatorname{TREE}(3)}{\underbrace{\operatorname{TREE}(\operatorname{TREE}(\operatorname{TREE}(\operatorname{TREE}(\operatorname{TREE}(⋯(\operatorname{TREE}(3)))))))}}# That doesn't help either, so how does it work?
First, we'll start with the SCG function instead (no, it isn't misspelled). Well, like $\operatorname{TREE}$, we find the length of the longest sequence of valid graphs. The first graph is called $G_1$, the second one is $G_2$, and so on. Graph $i$ ($G_i$) can have up to $i+x$ (where $x$ is the value in $\operatorname{SCG}(x)$) vertices (or dots). Here's an example with $\operatorname{SCG}(0)$: If we don't allow loops (like $G_1$ in the example above) or points with multiple lines connecting them, then we get $\operatorname{SSCG}$, another function. As you'd expect, this number is smaller than $\operatorname{SCG}$ but is still more popular! Also, no earlier graph may be a graph minor of any later graph. A graph minor is created by removing points, removing lines, and finally, collapsing (contracting) lines. Imagine collapsing a line as moving a point onto another point, while keeping the lines attached. Once you finish moving, delete the duplicate point but keep the lines in place. (Remember, for $\operatorname{SCG}$, you can have multiple lines connecting the same points!) That should give you a good idea of how it works!
Oh, both SSCG and SCG grow similarly quickly, as $\operatorname{SSCG}(x)<\operatorname{SCG}(x)≤\operatorname{SSCG}(4x+3)$.
Super-Duper Numbers Remember how the Busy Beaver is uncomputable (maybe look at the Busy Beaver section if you've forgotten)? Well, let's say that there's a manager for the first floor who somehow solved the halting problem, and knows the result! Now, the manager and the beaver contact each other through TuringPhones®. There are three new states introduced: "count," "halted," and "eternal." Whenever the beaver goes into the "count" state, the manager counts the number of lit rooms (call that $n$) and simulates the $n$th Turing machine. (How these machines are sorted isn't important, but there's a one-to-one mapping so that each machine is somewhere on the list.)
If it halts, the manager puts the beaver into the "halted" state. Otherwise, the beaver gets put into the "eternal" state. This may seem like a small change, but the resulting number is much bigger than the Busy Beaver number; sadly, it is again uncomputable! Let's call this a Level 1 Turing machine.

Of course, managers have managers too, and you could have a Level 2 Turing machine, where you get three extra states, simulating the $n$th Level 1 Turing machine, and so on, and so on, and so on. But that's not too important to discuss, so we'll just move on from this little bit of trivia, shall we?

All right, fine. I admit it: I made some of it up so that it isn't as boring to read.
Back from the Break? We left you off exploring how our fast-growing hierarchy was so successful. Now, we apply these principles with the $ψ$ symbol. (The inventors of these ideas sure loved Greek symbols, huh?) #ψ(0)=ε_0# That was pretty simple, although the definition seems strange. For finite values $x$, $ψ(x+1)[n]=ψ(x)↑↑n$. For example, $ψ(1)[4]=ε_0^{ε_0^{ε_0^{ε_0[4]}}}=ε_0^{ε_0^{ε_0^{ω^{ω^{ω^ω[4]}}}}}$ and $ψ(2)[4]=ψ(1)^{ψ(1)^{ψ(1)^{ψ(1)[4]}}}$. Later on, this function will get more and more powerful as we introduce new symbols. We can use a special CAPITAL omega symbol (sometimes written as $ω_1$) for iteration: #ψ(Ω)[n]=ψ^n(0)\newline ψ(Ω+1)[n]=φ_1(φ_2(0)+1)=ψ(Ω)↑↑n=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω)^{ψ(Ω)^{ψ(Ω)^{ψ(Ω)^{ψ(Ω)}}}}}}\newline ψ(Ω+(n+1))[n]=ψ(Ω+n)↑↑n=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω+n)^{ψ(Ω+n)^{ψ(Ω+n)^{ψ(Ω+n)^{ψ(Ω+n)}}}}}}\newline# This function alone won't get us very far. Notably, it does give us an idea: just like the $φ$ functions, we iterate the function, each time iterating a slightly smaller value. Think about how infinities are special. Well then, the $Ω$ infinity (yes, it's an infinity) is special-special. Confusing? Normally, $Ω$ is uncountable, which means that it's INFINITELY bigger than the infinities we discussed previously; however, these things are actually called meta-ordinals. Meta-ordinals can be considered symbols that "prompt" an operation, rather than actually representing, in this case, absolute infinity. #ψ(Ω+Ω)[n]≠ψ(Ω+Ω[n])# Instead, we use iteration. Below are the first few values: #ψ(Ω2)[2]=ψ(Ω+ψ(0))\newline ψ(Ω2)[3]=ψ(Ω+ψ(Ω+ψ(0)))\newline ψ(Ω2)[4]=ψ(Ω+ψ(Ω+ψ(Ω+ψ(0))))\newline ψ(Ω2)[5]=ψ(Ω+ψ(Ω+ψ(Ω+ψ(Ω+ψ(0)))))# (Remember, $Ω2=Ω×2$.) Before we plow through the functions, we will quickly define $f(a,b)$ to be equal to $f_a(b)$, as the subscripts get hard to read if we don't.
Let's try $f(ψ(Ω2),2)$ (or $f_{ψ(Ω2)}(2)$). With functions growing this fast, $f_{\text{an infinity}+1}(x)$ will still become $f_{\text{an infinity}}^x(x)$, which is INSANELY HUGE! Notably, $f_{\text{anything}}(1)$ eventually becomes $f_{\text{a large number}}(1)$, then $f_0(1)$, which is…two. However, with just a small increase… #f_{ε_0}(2)=f_{ω^ω}(2)=f_{ω2}(2)=f_{ω+2}(2)=f_{ω+1}(f_{ω+1}(2))=f_{ω+1}(f_ω(f_ω(2)))=f_{ω+1}(f_ω(f_2(2)))=f_{ω+1}(f_ω(8))=f_{ω+1}(f_8(8))\newline =f_ω^{f_8(8)}(f_8(8))>f_ω^{2↑↑↑↑↑↑↑8}(2↑↑↑↑↑↑↑8)=f_ω^{2↑^78-1}(f_ω(2↑^78))=f_ω^{2↑^78-1}(f_{2↑^78}(2↑^78))>f_ω^{2↑^78-1}(2↑^{2↑^78-1}(2↑^78))=\newline f_ω^{2↑^78-2}(f_ω(2↑^{2↑^78-1}(2↑^78)))=f_ω^{2↑^78-2}(f_{2↑^{2↑^78-1}(2↑^78)}(2↑^{2↑^78-1}(2↑^78)))>f_ω^{2↑^78-2}(2↑^{2↑^{2↑^78-1}(2↑^78)-1}(2↑^{2↑^78-1}(2↑^78)))# Clearly enormous, and also showing you that even $f_{ω+2}(x)$ is a force to be reckoned with. Back to the expansion of $f_{ψ(Ω2)}(2)$: #f(ψ(Ω2),2)\newline =f(ψ(Ω+ψ(0)[2]),2)\newline =f(ψ(Ω+ω^ω),2)\newline =f(ψ(Ω+ω2),2)\newline =f(ψ(Ω+ω+2),2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω+1)},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+ω)}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+2)}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω+1)}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(Ω)}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ψ(0))}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω^ω)}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω2)}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+2)}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω+1)}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(ω)}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(2)}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(1)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ψ(0)}}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω2}}}}}}}},2)\newline 
=f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+2}}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}ψ(0)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω^ω)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+2)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^{ω+1}}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ωψ(0)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω^ω)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+2)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)^ω}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)^2}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ψ(0))}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω^ω)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+2)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+ψ(0)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+2)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+1)}×ψ(1)}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+1)}×ψ(0)^{ψ(0)}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+1)}×ψ(0)^{ω^ω}}}}}}},2)\newline 
=f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+1)}×ψ(0)^{ω+2}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{\left(ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+1)}×ψ(0)^{ω+1}\right)×ψ(0)}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{\left(ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+1)}×ψ(0)^{ω+1}\right)×{ω^ω}}}}}}},2)\newline =f(ψ(Ω+ω+1)^{ψ(Ω+ω)^{ψ(Ω+1)^{ψ(Ω)^{ψ(ω+1)^{ψ(ω)^{\left(ψ(1)^{ψ(0)^{ω+1}(ω+1)+ψ(0)^ω(ω+1)+ψ(0)(ω+1)+(ω+1)}×ψ(0)^{ω+1}\right)×{(ω+2)}}}}}}},2)# And it's like that for a long time. Slowly but surely, this function is being evaluated. The function expands to something humongous, and the symbols start piling up, eventually resulting in something incomprehensible! The next step would be to multiply $Ω$ by something larger: #ψ(Ω3)[n]=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω2+ψ(Ω2+ψ(Ω2+⋯(Ω2+ψ(0)))))}}\newline ψ(Ω4)[n]=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω3+ψ(Ω3+ψ(Ω3+⋯(Ω3+ψ(0)))))}}\newline ⋯\newline ψ(Ωx)[n]=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω(x-1)+ψ(Ω(x-1)+ψ(Ω(x-1)+⋯(Ω(x-1)+ψ(0)))))}}# These will all eventually become a huge number, as we can eventually get it to the form of $ψ(Ω+n)$, which allows us to solve a small part of it, so, yes, it's computable. The reasonable next step would be to square ($x×x$) and cube ($x×x×x$) the $Ω$: #ψ(Ω^2)[n]=ψ(Ω×Ω)[n]=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω×ψ(Ω×ψ(Ω×⋯(ψ(Ω×ψ(0))))))}}\newline ψ(Ω^3)[n]=ψ(Ω×Ω×Ω)[n]=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω×Ω×ψ(Ω×Ω×ψ(Ω×Ω×⋯(ψ(Ω×Ω×ψ(0))))))}}# Remember how all the symbols were infinities? Well, the reason they don't return an infinite result is because of limits. (Welcome to the wonderful world of advanced math!) It's represented with $\lim$ in math: #\lim_nε_0=\lim_nω^{ω^{ω^{ω^⋰}}}=\underset{n\ ω\text{'s}}{\underbrace{ω^{ω^{ω^{ω^⋰}}}}}# So, instead of calculating $ω^{ω^{ω^{ω^⋰}}}$ infinitely, we stop exponentiating after we have a total of $n$ $ω$'s. 
(We use the $[n]$ to indicate that the limit value is $n$ so we don't write $\lim$ every single time. Also, note that these limits may not be valid for other functions that aren't the fast-growing hierarchy.) All right. Let's exponentiate to some infinities: #ψ(Ω^x)[n]=φ_x(0)[n]=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω^{x-1}×ψ(Ω^{x-1}×ψ(Ω^{x-1}×⋯ψ(Ω^{x-1}×ψ(Ω^{x-1}×ψ(0))))))}}\newline ψ(Ω^ω)[n]=φ_n(0)=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω^{n-1}×ψ(Ω^{n-1}×ψ(Ω^{n-1}×⋯ψ(Ω^{n-1}×ψ(Ω^{n-1}×ψ(0))))))}}# Be aware that some people define values smaller than $ψ(Ω^ω)$ differently; this isn't too important, as the functions catch up at that point anyway. (Catching up means that the values are "close enough" by googology standards.) Next up is a pretty big landmark: $Ω^Ω$. #ψ(Ω^Ω)[n]=\underset{n\ ψ\text{'s}}{\underbrace{ψ(Ω^{ψ(Ω^{ψ(Ω^{ψ(Ω^{ψ(Ω^{⋰^{ψ(Ω^{ψ(0)})}})})})})})}}=\underset{n\ φ\text{'s}}{\underbrace{φ_{φ_{φ_{φ_{φ_{φ_⋯φ_{φ(0)}(0)}(0)}(0)}(0)}(0)}(0)}}=Γ_0[n]# This new symbol, $Γ_0$, represents the Feferman-Schütte Ordinal, and, again, is Greek (an uppercase gamma this time — it does look a bit like an upside-down L). We've now surpassed the simple $φ$ stuff, but we can go even further. But before we do, just make a mental note that $φ_x(y)=φ(x,y)$. (We'll deal with more inputs below!) #Γ_{k+1}[n]=\underset{n\ φ\text{'s}}{\underbrace{φ_{φ_{φ_{φ_{φ_{φ_⋯φ_{Γ_k+1}(0)}(0)}(0)}(0)}(0)}(0)}}\newline φ(1,1,0)[n]=\underset{n\ Γ\text{'s}}{\underbrace{Γ_{Γ_{Γ⋯Γ_0}}}}=φ(1,0,φ(1,0,⋯φ(1,0,0)+1⋯))\newline φ(1,1,1)[n]=\underset{n\ φ\text{'s}}{\underbrace{φ(1,0,φ(1,0,⋯φ(1,1,0)+1⋯))}}\newline φ(1,1,n+1)[n]=\underset{n\ φ\text{'s}}{\underbrace{φ(1,0,φ(1,0,⋯φ(1,1,n)+1⋯))}}# You might be wondering what $φ(1,2,0)$ is. #φ(1,2,0)=\underset{n\ φ\text{'s}}{\underbrace{φ(1,1,φ(1,1,⋯φ(1,1,0)+1⋯))}}# Need an example? $φ(1,1,0)[3]$ would equal $Γ_{Γ_{Γ_0+1}}$, and $φ(1,1,1)[4]$ would equal $φ\Biggl(1,1,φ\biggl(1,1,φ\Bigl(1,1,φ\bigl(1,1,0\bigr)+1\Bigr)\biggr)\Biggr)$. We can go further...
#φ(1,a+1,0)[n]=\underset{n\ φ\text{'s}}{\underbrace{φ(1,a,φ(1,a,⋯φ(1,a,0)+1⋯))}}\newline φ(1,a,b+1)[n]=\underset{n\ φ\text{'s}}{\underbrace{φ(1,a,φ(1,a,⋯φ(1,a,b)+1⋯))}}\newline φ(1,0,0,0)[n]=\underset{n\ φ\text{'s}}{\underbrace{φ(φ(φ(0,0,0)⋯,0,0),0,0)}}=\underset{n\ φ\text{'s}}{\underbrace{φ(φ(1⋯,0,0),0,0)}}# You can imagine how the pattern continues, right? This type of recursion is quite powerful, so our next step would be $φ(1,0,0,0,0)[n]=\underset{n\ φ\text{'s}}{\underbrace{φ(φ(φ(0,0,0,0)⋯,0,0,0),0,0,0)}}$, then $φ(1,0,0,0,0,0)$, and so on, until we get to $φ(1@n)$. #φ(1@n)=\underset{n\ 0\text{'s}}{\underbrace{φ(1,0,0⋯0,0,0)}}\newline φ(a@b)=\underset{b\ 0\text{'s}}{\underbrace{φ(a,0,0⋯0,0,0)}}# This can be extended much further: #φ(1@ω)[k]=\underset{k\ 0\text{'s}}{\underbrace{φ(1,0,0⋯0,0,0)}}=ψ(Ω^{Ω^ω})[k]\newline φ(1@(10@2))[k]=\underset{k\ φ\text{'s}}{\underbrace{φ(1@(9,φ(1@(9,φ(1@(9,φ(1@(9,⋯,0,0)),0,0)),0,0))⋯,0,0))}}\newline φ(1@((a+1)@2))[k]=\underset{k\ φ\text{'s}}{\underbrace{φ(1@(a,φ(1@(a,φ(1@(a,φ(1@(a,⋯,0,0)),0,0)),0,0))⋯,0,0))}}\newline φ(1@((a+1)@3))[k]=\underset{k\ φ\text{'s}}{\underbrace{φ(1@(a,φ(1@(a,φ(1@(a,φ(1@(a,⋯,0,0,0)),0,0,0)),0,0,0))⋯,0,0,0))}}\newline φ(n@((a+1)@3))[k]=\underset{k\ φ\text{'s}}{\underbrace{φ(n@(a,φ(n@(a,φ(n@(a,φ(n@(a,⋯,0,0,0)),0,0,0)),0,0,0))⋯,0,0,0))}}\newline φ(n@((a+1)@b))[k]=\underset{k\ φ\text{'s, each containing }b\text{ zeroes}}{\underbrace{φ(n@(a,φ(n@(a,φ(n@(a,φ(n@(a,⋯,0,0⋯0)),0,0⋯0)),0,0⋯0))⋯,0,0⋯0))}}\newline# Googologists call $φ(1@ω)$, or $ψ(Ω^{Ω^ω})$ (these two are the same; weird, right?), the Small Veblen Ordinal, or SVO. $ψ(Ω^{Ω^Ω})$, or $φ(1@(1,0))$, is the Large Veblen Ordinal, or LVO.
If that's strange, here are a few more examples: #φ(3@(2@0))[k]=φ(3@2)[k]=φ(3,0,0)[k]\newline φ(1@(5@2))[4]=φ(1@φ(5@2))[4]=φ(1@(4,φ(1@(4,φ(1@(4,φ(1@(4,0,0)),0)),0)),0))\newline φ(5@(10@2))[4]=φ(5@(9,φ(5@(9,φ(5@(9,φ(5@(9,0,0)),0)),0)),0))\newline φ(5@(10@4))[3]=φ(5@(9,φ(5@(9,φ(5@(9,0,0,0,0)),0,0,0)),0,0,0))\newline φ(x@((y+1)@4))[3]=φ(x@(y,φ(x@(y,φ(x@(y,0,0,0,0)),0,0,0)),0,0,0))\newline φ(x@((y+2)@4))[2]=φ(x@(y+1,φ(x@(y+1,0,0,0,0)),0,0,0))=⋯\newline# Note that this can also be extended to larger values, such as $φ(1@(1@(1@(1,0))))$, relatively easily, but that's just more of what we've already done, so we'll skip ahead. The next huge step, in that case, would be to repeat that. #ψ(ε_{Ω+1})[k]=\underset{k\ Ω\text{'s}}{\underbrace{ψ(Ω^{Ω^{Ω^{Ω^{Ω^⋰}}}})}}=\underset{k-2\ @\text{'s}}{\underbrace{φ(1@(1@(1@(1@(1@(1@(⋯)))))))}}# This value is the Bachmann-Howard ordinal, or BHO. (It's also famous.) Think about how large $f_{ψ(ε_{Ω+1})+1}(3)$ would be...
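All of this ordinal machinery ultimately plugs back into the fast-growing hierarchy. As a sanity check on how explosive even the *finite* levels are, here's a minimal Python sketch, using the usual convention $f_0(n)=n+1$ and $f_{k+1}(n)=f_k^n(n)$ (the function name `f` and the digit-estimate trick are just ours):

```python
import math

def f(k: int, n: int) -> int:
    """Fast-growing hierarchy at a finite index k.
    Convention: f_0(n) = n + 1, f_{k+1}(n) = f_k applied n times to n."""
    if k == 0:
        return n + 1
    for _ in range(n):  # range(n) is fixed up front, so f_{k-1} is applied exactly n times
        n = f(k - 1, n)
    return n

print(f(1, 3))  # f_1(n) = 2n      -> 6
print(f(2, 3))  # f_2(n) = n * 2^n -> 24

# f_3(3) = f_2(f_2(f_2(3))) = f_2(f_2(24)). Iterating f_0 that many times is
# hopeless, but the closed form f_2(m) = m * 2**m lets us peek ahead:
m = 24 * 2**24                                       # f_2(24) = 402,653,184
digits = int(m * math.log10(2) + math.log10(m)) + 1  # digit count of f_2(m) = m * 2**m
print(digits)                                        # f_3(3) has over 121 million digits
```

So $f_3(3)$ already dwarfs anything printable, and everything above merely pushes the *index* of $f$ to ever-larger ordinals.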
Those numbers sure are huge! These functions are hurting my head though (and more likely than not, yours as well), so we'll stop talking about them for now. @@@@@@@@@@@@@@@@@@@@@@@@⋯@@@@@@@@@@@@@@@@@@@@@@@@
The Beastly Hydra: Larger than Ever This one's a little confusing. It's also not that important. This also has to do with trees, but this time one called a hydra. A hydra is a tree (a finite one, despite how it behaves), and the game involves a hero who cuts off its branches. Your hydra, or monster in this case, has a large number of heads, and heads can have other heads attached to them! (If you're having a hard time picturing this, just imagine the TREE diagram, but with only one color.)
The hero gets to chop off any head of the hydra that has no further heads attached to it. The monster has only one body, and the heads connected directly to the body are the only ones that don't regenerate when chopped. However, if you cut off any of the other heads (ones with no heads attached to them), then we clone part of the hydra.
How? Well, the hydra is really a giant graph. Remember how you can only cut off heads not attached to other heads? Take the head that your newly chopped-off head was attached to. (From here on in this paragraph, "the head" means this one, NOT the newly chopped-off one.) Then, we make $x$ new copies of everything above the head, as it stands AFTER the head-chopping, and attach them to that head.

And the result grows pretty quickly. The considerable number of heads may seem like it will never end, but in reality, EVERY SINGLE HYDRA eventually dies. (Which means you can define a large number from this: pick a hydra, let $x$ be the number of copies that regenerate on every chop, and count the chops needed to kill it. For a good hydra, that count is a HUGE number!)
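Here's a tiny Python sketch of the game. A caveat: the exact attachment point for the copies varies between presentations of the hydra game; this sketch uses the classic Kirby–Paris reading (copies of what's left of the parent get attached beside it), which is close to, but not necessarily identical to, the rule described above. It just demonstrates that small hydras really do die, and counts the chops:

```python
from copy import deepcopy

def chop(parent, grandparent, copies):
    """Chop the first available head (a node with nothing growing out of it).
    Heads attached directly to the body (grandparent is None) just vanish;
    any other chop appends `copies` copies of what's left of the parent
    beside it. Returns True once a head has been chopped."""
    for i, child in enumerate(parent):
        if not child:                    # a bare head: chop it off
            del parent[i]
            if grandparent is not None:  # regrowth rule
                for _ in range(copies):
                    grandparent.append(deepcopy(parent))
            return True
    return any(chop(child, parent, copies) for child in parent)

def slay(hydra, copies):
    """Chop until the hydra is dead; return the number of chops."""
    chops = 0
    while hydra:
        chop(hydra, None, copies)
        chops += 1
    return chops

# body -> neck -> head: the smallest interesting hydra, dead in 5 chops
print(slay([[[]]], copies=3))
# one level taller: still finite, just takes a while longer
print(slay([[[[]]]], copies=2))
```

Adding even one more level of nesting (or bumping `copies`) makes the chop count explode, which is exactly where the function's strength comes from.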
There isn't a lot of information about this function, so take all of this with a few (or many) grains of salt: it is believed to be bigger than TREE and even SSCG (more popular than SCG, by the way). So yeah. This headache-inducing hydra is a huge mess, and we don't understand it fully! You may have wondered: why aren't TREE, SSCG, and HG in the sneak peek? Well, it's because we don't know exactly where these functions lie in there. Upper bounds have been proven, but they are pretty loose, even by googology standards.
Setting up Sets and a Googol Here's a fun fact: Rayo's number was part of a big number duel at MIT. The goal was to write down the biggest number that you could think of, and each new number had to be genuinely new (so you couldn't just add "plus one" to someone else's answer). You also had to be specific, so nothing like "the biggest number you could think of plus two." Understandably, the stands were crowded, and we present to you the final result (by Rayo himself, of course):
∀R { { ∀[ψ], t: R([ψ],t) ↔ (([ψ] = "xi ∈ xj" ∧ t(xi) ∈ t(xj)) ∨ ([ψ] = "xi = xj" ∧ t(xi) = t(xj)) ∨ ([ψ] = "(¬θ)" ∧ ¬R([θ], t)) ∨ ([ψ] = "(θ∧ξ)" ∧ R([θ], t) ∧ R([ξ], t)) ∨ ([ψ] = "∃xi(θ)" ∧ ∃t’: R([θ], t’)) (where t’ is a copy of t with xi changed) )} ⇒ R([ϕ],s) }
The smallest number bigger than every finite number $m$ with the following property: there is a formula $φ(x_1)$ in the language of first-order set-theory (as presented in the definition of "$\operatorname{Sat}$") with less than a googol symbols and $x_1$ as its only free variable such that: (a) there is a variable assignment $s$ assigning $m$ to $x_1$ such that $\operatorname{Sat}([φ(x_1)],s)$, and (b) for any variable assignment $t$, if $\operatorname{Sat}([φ(x_1)],t)$, then $t$ assigns $m$ to $x_1$.

Huh? Yeah, I agree.

Let's make that more understandable. It essentially means "the smallest number bigger than any finite value obtainable using up to a googol symbols in first-order set theory." Now, WHAT'S FIRST-ORDER SET THEORY!?

First-order set theory involves sets and symbols, in a very wonky language. Zero alone takes ten symbols, which sounds wasteful, but the numbers you can express grow far faster than the symbol counts do. Just think about what you can make with up to $10^{100}$ symbols! No one knows how big this number is, because there are so many things you can make. #\operatorname{Rayo}(10)=0=(¬∃x_2(x_2∈x_1))\newline \operatorname{Rayo}(30)=1=(∃x_2(x_2∈x_1)∧(¬∃x_2((x_2∈x_1∧∃x_3(x_3∈x_2)))))# (The function $\operatorname{Rayo}(x)$ represents the largest value definable in $x$ or fewer symbols. Be careful! That also means that $\operatorname{Rayo}(11)$, $\operatorname{Rayo}(12)$, and all values up to $\operatorname{Rayo}(29)$ equal zero.) This is one of the LARGEST named numbers! We know that $\operatorname{Rayo}(728+75x)>2↑↑x$, and, with just $7339$ symbols, we can surpass $\operatorname{BB}(2^{65536}-1)$.
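You can check those symbol counts yourself. Here's a quick Python sketch (the counting convention, as above, treats each variable $x_i$ as a single symbol, just like each of the logical symbols):

```python
import re

def count_symbols(formula: str) -> int:
    """Count symbols in a first-order set-theory formula, where each
    variable x1, x2, ... counts as ONE symbol, as does each of ( ) ∈ = ¬ ∧ ∃."""
    return len(re.findall(r"x\d+|[()∈=¬∧∃]", formula))

zero = "(¬∃x2(x2∈x1))"                            # "nothing is in x1", i.e. x1 = 0
one = "(∃x2(x2∈x1)∧(¬∃x2((x2∈x1∧∃x3(x3∈x2)))))"   # "x1 = {0}", i.e. x1 = 1
print(count_symbols(zero))  # 10
print(count_symbols(one))   # 30
```

Ten symbols for zero, thirty for one, exactly as claimed above.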

Just like with the Busy Beaver, the number of possible formulas is limited: you get a fixed set of symbols, and only some arrangements of them actually form something valid. However, even with the limited quantity of formulas creatable, the values they define can be HUGE.
Can we go bigger? #\operatorname{Rayo}(10^{100})+1=Z(0,10^{100})\newline \operatorname{Rayo}(x)+1=Z(0,x)\newline Z(1,x)=\text{The smallest number bigger than any number obtainable by}\newline\text{ using less than }x\text{ symbols, where an additional symbol, }\operatorname{Rayo}(x)\text{, is given.}# Why add one? $\operatorname{Rayo}(x)$ is already the smallest number bigger than all representable values, so $Z(0,x)$ is the smallest number bigger than that. This can also continue: for $Z(2,x)$, you have access to the symbols $Z(1,x)$ and $Z(0,x)$ (and find the smallest number larger than anything obtainable with them). For $Z(3,x)$, you get $Z(2,x)$, $Z(1,x)$, and $Z(0,x)$, and so on. These values only cost one symbol each, so the new numbers you can obtain get much bigger. As you might expect, each increase of $n$ in $Z(n,x)$ creates a number much bigger than $Z(n-1,x)$.
Except...this is a great example of a salad number. A salad number is a number that appears to be much bigger than its ingredients (which it normally would be). Unfortunately, in terms of googology, it's pretty much the same. For example, a factorial will ALWAYS be powerful by conventional standards, but in terms of googology, it isn't important. (In other cases, salad numbers may not even have a good definition, such as the $Z$ function we made up. In this case, how exactly do we represent the Rayo number as a symbol?)

At the end of the day, none of this matters if you're in a casual large-number competition with your friend. They'll likely beat you by saying "whatever you said plus one," or cop out and say "infinity." The lesson? Be sure to set the rules like they did at the MIT contest!

The End
Copyright 2024 Leo Zhang. All rights reserved (well, until copyright laws take effect).
Thank you so much for reading everything (or, at least, I hope you did)! It took me a long time to write all of this and write the $\LaTeX$ formulas, and I know that I learned a lot when making this too.