# Reading > Philosophical Literature > Cosmology

## desiresjab

To me there is no greater or more interesting subject than the universe itself. I have no technical training in cosmology. Everything I know or _know about_ is from reading and self-education on the subject. Not an expert, just another person with opinions and questions.

A question that nags me constantly is whether numbers preceded the laws of physics in the creation of the universe. Did number as some kind of Platonic _ideal form_ precede the universe itself? It seems that possibly the laws of arithmetic had to precede our universe. Would not the laws of arithmetic be invariant across all universes, if there is more than one? I can understand a different periodic table with strange elements in other universes, but a different arithmetic is hard for me to imagine. We ourselves have constructed algebras where AxB does not equal BxA. That was not our natural arithmetic. Are other universes possible where, for instance, it was the other way around? In some other universe did they have to invent a weird algebra to make AxB equal BxA? Even if they did, 2 still meant two to them, didn't it?

Perhaps this question is interesting enough to draw some comments. It seems to stay with me.

----------


## YesNo

I'm no expert in these matters either, but I do like to express my opinion often because I don't know what my opinion even is.

Regarding numbers, I would agree with you that they seem to transcend space and time, which are what our universe is limited by. However, I was under the impression that the matrix mathematics modeling quantum physics is non-commutative, that is, AxB does not have to equal BxA. A and B in this case are matrices, not numbers.
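To make that concrete, here is a toy sketch in plain Python; the particular matrices are arbitrary examples, not anything taken from quantum mechanics:

```python
# Multiply two 2x2 matrices, given as nested lists, in both orders.
def matmul(A, B):
    """Return the 2x2 matrix product A*B."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # swaps the columns of whatever multiplies it from the left

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- a different result
```

So AxB and BxA disagree for most matrix pairs, even though ordinary numbers never care about the order.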

The idea of "forms" is interesting. They don't seem to be substantial and so they don't seem real. There may be all kinds of realities that do not seem substantial, but which are still real. Electromagnetic fields might be one example of this. Their only substance is that for any point of space we can assign a value to their strength, which conforms with experiment. But there's nothing there. To bring this even closer to home, are species real, ontological things, or are they just a way to model living reality? If one takes Niles Eldredge's punctuated equilibria idea of evolution, they are real. That would mean we are part of a larger living reality.

Another thing about our universe is that it is constrained by its space and time and these two are tied together by the maximum limit on the speed of light. Being in the universe can then be defined as being subjected to this maximum speed limit which is called "locality". But there are non-local phenomena observed through entangled particle experiments. So there is a non-local, not-inside-the-space-and-time-universe reality whose effects we can observe. 

OK, I've rambled enough.

----------


## Eupalinos

I too apologize for my ignorance, which I'm confident is greater than either of yours, but a question I have that may seem very naive is why the limits of the universe are ascribed to space and time. Are not space and time perceptions we have of something, and not the something itself? Just as we perceive a color or sound, unaware of light or sound waves themselves. What we call space, time, sound, color are interpretations of perceptions, no? Therefore why are space and time distinguished as more important than any other ways we perceive reality?

Don't the laws of physics, in our own universe, change according to the nearness of the big bang, that is, the further back one goes towards the beginning? So I rashly assumed the physics of a universe were probably unique and that if there are other universes their laws would likely evolve in a different way. I hope someone more knowledgeable will chime in.

----------


## North Star

> To me there is no greater or more interesting subject than the universe itself. I have no technical training in cosmology. Everything I know or _know about_ is from reading and self-education on the subject. Not an expert, just another person with opinions and questions.
> 
> A question that nags me constantly is whether numbers preceded the laws of physics in the creation of the universe. Did number as some kind of Platonic _ideal form_ precede the universe itself? It seems that possibly the laws of arithmetic had to precede our universe. Would not the laws of arithmetic be invariant across all universes, if there is more than one? I can understand a different periodic table with strange elements in other universes, but a different arithmetic is hard for me to imagine. We ourselves have constructed algebras where AxB does not equal BxA. That was not our natural arithmetic. Are other universes possible where, for instance, it was the other way around? In some other universe did they have to invent a weird algebra to make AxB equal BxA? Even if they did, 2 still meant two to them, didn't it?
> 
> Perhaps this question is interesting enough to draw some comments. It seems to stay with me.


In some ways, I have to disagree with you. Mathematical operators would have to work the same way as they do in our universe, as the logic isn't bound by anything 'real', but our decimal system wouldn't mean anything to creatures that didn't develop mathematics using their ten fingers to count things. The four fundamental forces, the strong nuclear force, the weak nuclear force, gravitation and electromagnetism, would still govern the universe, though. All the elements in the periodic table, and their countless different compounds, are formed in accordance with these forces. Another planet or whatnot may have a different ratio of these elements (and other particles, such as gravitons and graviolis), but I doubt that there could be elements that couldn't exist or be manufactured on Earth or in a supernova or some such.





> I too apologize for my ignorance, which I'm confident is greater than either of yours, but a question I have that may seem very naive is why the limits of the universe are ascribed to space and time. Are not space and time perceptions we have of something, and not the something itself? Just as we perceive a color or sound, unaware of light or sound waves themselves. What we call space, time, sound, color are interpretations of perceptions, no? Therefore why are space and time distinguished as more important than any other ways we perceive reality?
> 
> Don't the laws of physics, in our own universe, change according to the nearness of the big bang, that is, the further back one goes towards the beginning? So I rashly assumed the physics of a universe were probably unique and that if there are other universes their laws would likely evolve in a different way. I hope someone more knowledgeable will chime in.


We sense wave motion as sound and colour. Motion happens in space and time, they are as real as the pressure waves producing the sound. And space and time are rather important. E.g. The Pauli exclusion principle states that two identical fermions can't occupy the same quantum state simultaneously. I would also be quite intolerant towards any attempt to sit on the chair I happen to be occupying at any given moment. You can sit there some other time, or you can sit then in some other place, and I don't care if you have a green or a red shirt. If you happen to be listening to something good, I might be more willing to make allowances, though.

The 'laws' of physics don't change, even though the matter changes. The law doesn't change when you get old enough to be able to legally drink alcohol.

----------


## Eupalinos

Thanks for the comments, North Star. Why, though, from philosophers to physicists, have space and time been placed ABOVE everything else? Why is wave motion, say, not of equal importance? Can space and time be experienced any more objectively than wave motion? (It seems implied in a lot of philosophy that it can be and is.)

----------


## Eupalinos

The universe functioned very differently at its birth than it does now, isn't that right? I thought the laws (some say habits) evolved over time.

----------


## North Star

> The universe functioned very differently at its birth than it does now, isn't that right? I thought the laws (some say habits) evolved over time.


The laws are the same forever, and everywhere - that is why they are called universal laws. The universe has evolved, and interactions of particles and forces have become more varied, but the laws are still the same. As a person is born and grows up, their behaviour, rights, responsibilities and interactions with the world change, but that doesn't necessarily mean that the laws and norms of the society or the physical world have also changed.

----------


## Eupalinos

I've read that precisely this idea of universal laws has been a point of contention among physicists. There's been evidence that has led to the speculation that the laws are rather the equivalent of local by-laws. (These analogies to civic life in any case somehow don't seem quite apt.) There have been observations suggesting that the fine-structure constant varies across space. There is endless speculation about what is theoretically possible in other universes.

----------


## North Star

> I've read that precisely this idea of universal laws has been a point of contention among physicists. There's been evidence that has led to the speculation that the laws are rather the equivalent of local by-laws. (These analogies to civic life in any case somehow don't seem quite apt.) There have been observations suggesting that the fine-structure constant varies across space. There is endless speculation about what is theoretically possible in other universes.


Sure. There have been plenty of laws discovered that have turned out to not work in all circumstances. That does not necessarily mean that there are no laws, however. It might just mean we don't know precisely what the laws (or constants) actually are.

----------


## desiresjab

Some excellent responses. I think we have not discovered all the laws of physics yet. The ones we have so far are correct but not the whole picture. Our attempt is to express any anomalies or new discoveries with only those laws we know so far. That is all we can do until/if more laws of physics are discovered.

To answer YesNo: yes, the non-commutativity does apply to matrices, not natural numbers. It is still a different arithmetic, a weird one to us. My speculative question was whether in other universes beings would have to invent something that seemed strange to them that allowed AxB to equal BxA. I am not sure the question is very good.

----------


## YesNo

Regarding laws of physics, I agree with North Star that they aren't supposed to change. This is more of a convenient assumption on our parts. We have to assume that what we can verify in the here and now will also work in any there and then we might imagine.

I think of the laws of physics as a special form of literature, a kind of sacred text, that we take as literally true (until someone can convince us these texts are false). We tend to forget that people wrote the laws of physics for the use of people. They are not out there in reality. They are only models. We hope they are a good representation for whatever is really out there, but as long as they work well enough for our current purposes they are probably fine.

One of the problems with the laws of physics is that they were written in mathematical languages. Mathematics is deterministic, with constants like pi that are precise to arbitrarily many decimal places. This feature makes us think that reality is the same way. But I agree with Eupalinos that reality is most likely not, given what has been found out about the uncertainty in quantum physics. Even the constants used in the laws of physics may not be constant in reality like pi is in mathematics.

----------


## desiresjab

> Regarding laws of physics, I agree with North Star that they aren't supposed to change. This is more of a convenient assumption on our parts. We have to assume that what we can verify in the here and now will also work in any there and then we might imagine.
> 
> I think of the laws of physics as a special form of literature, a kind of sacred text, that we take as literally true (until someone can convince us these texts are false). We tend to forget that people wrote the laws of physics for the use of people. They are not out there in reality. They are only models. We hope they are a good representation for whatever is really out there, but as long as they work well enough for our current purposes they are probably fine.
> 
> One of the problems with the laws of physics is that they were written in mathematical languages. Mathematics is deterministic, with constants like pi that are precise to arbitrarily many decimal places. This feature makes us think that reality is the same way. But I agree with Eupalinos that reality is most likely not, given what has been found out about the uncertainty in quantum physics. Even the constants used in the laws of physics may not be constant in reality like pi is in mathematics.


Mathematics is not a problem. Well, it is a problem when we cannot understand it and would like to. I run into math beyond the boundaries of my knowledge all the time.

The misconception I find with non math folk in my experience is not exactly _This feature makes us think that reality is the same way_, but it is related perhaps. Many people do not understand that proofs in science and proofs in math are not done the same way. Scientific "proofs" consist of the repeatability of experimental results. Then the math is worked out and confirms the results. Sometimes the math comes first. It might lie there practically unnoticed for generations until a smart scientist notices the connection to his experiments. Pure mathematical proofs require no more than some sand and a finger.

We are all familiar with the phrase _the unreasonable effectiveness of mathematics_. Some mathematicians have pointed out that they think _the unreasonable ineffectiveness of mathematics_ is more realistic. They say mathematics is dreadfully ineffective everywhere that it is not unreasonably effective, which is a lot of places.

A few scientists are currently exploring the hypothetical connection between consciousness and quantum physics. What a monumental task! Is mathematics even suited for the job? Apparently they are using Lie algebra for some of the work, according to Ed Mitchell. I have a suspicion that new mathematics will have to be invented. I have never done any Lie algebra, so I can't say much about it or its chances of being successful. It is interesting and encouraging that high powered minds are now taking up this challenge.

I have my own theories about consciousness and quantum physics, but they are lay fantasies neither supported nor refuted by math. We are in between the scale of the universe and the scale of the atom. I do not know if it is my own phrase, but I call it _quantum leakage into our scale_. I have a suspicion (not a belief) that alpha religious experiences, dreams, ESP, all of what we call psychic phenomena are due to quantum leakage into our scale. Dreams seem to share many characteristics with quantum "reality." I could easily believe that a hundred years or more will pass before any progress is made in this endeavour. Maybe never. We may meet our limits somewhere, and this could be the place.

----------


## YesNo

I haven't read Wigner's paper on the unreasonable effectiveness of mathematics. Here is a Wikipedia article: https://en.wikipedia.org/wiki/The_Un...tural_Sciences This is probably the paper itself: http://www.dartmouth.edu/~matc/MathD...ng/Wigner.html

I don't think mathematics needs to describe some natural phenomenon. For example, transfinite numbers don't seem to have much use in our finite universe. Only some mathematics has use-value in the natural sciences. 

I like your idea of quantum leakage, but I don't understand it. Regarding supernormal phenomena (to use Dean Radin's term), I assume there is more going on than we are culturally willing to admit.

Although the natural sciences rely heavily on mathematics to make predictions, other sciences such as economics or psychology don't, unless they are processing data. They still make predictions based on theory (rules or laws). Sometimes the predictions actually come true, which is why people want to get the advice of economists and psychiatrists. Even a reader of Tarot cards is making predictions based on a theory, rules or laws that they interpret, and people pay for those services for the same reason they pay for a physicist's or economist's or a psychiatrist's predictions: the predictions provide use-value to them.

But think of what each of these disciplines attempting to make predictions implies about reality. They are not consistent views. The physicist using deterministic mathematics implies that reality is deterministic and completely reducible to unconscious stuff. There is no need for consciousness in that view. The economist and psychiatrist assume there is some form of individual consciousness making choices, but nothing more than that. The Tarot reading implies there is some sort of psychic reality enveloping those individual consciousnesses.

The problem is that predictions on all these levels work reasonably well enough that people are willing to pay physicists, economists, psychiatrists and Tarot readers for their services. They are all unreasonably effective, or effective enough to provide use-value to others.

----------


## desiresjab

Comparing Tarot cards to mathematics and science seems like a bad analogy from the start. Tarot card prediction can do no better than chance in the long run, and is pure charlatanism, hardly good enough for science. In fact, statistics can be used to disprove the use-value of Tarot cards to anyone who can understand the reasoning. Tarot cards are no better at prediction than religion or soothsayers. _Well, it's useful to me, dammit_, is hardly good enough either. This is no more logical than people standing on the word Faith to actually mean, _I will believe any illogical thing I want, and you cannot prove I am wrong_.

Calculus uses both infinitesimal and infinite "quantities" with wonderful and undeniable results. Complex numbers were useless to begin with, other than providing a theoretical basis for solutions we already knew existed, but have now found their way into many fields where they are gainfully employed with everyday jobs. Transfinite math may well have its day in applied math. No one can say. 

Determinism is not a mathematical construct. Mathematics is neutral, unless you want to claim that two must follow one is determinism. The fact that the square root of 64 must be 8 does not to me imply that the physical universe is deterministic. There is no physical cause and effect in math, other than the trivial genre I just cited. Maybe you have an idea which could change my mind on this, since it is not ironclad yet. Like a telescope, math neutrally facilitates and organizes logical observation, without standing for a particular view of the universe.

Maybe you feel the trivial determinism of math "leaks" into science, or pours in. Maybe it does. And surely I cannot prove otherwise. It is an interesting question.

----------


## YesNo

> Comparing Tarot cards to mathematics and science seems like a bad analogy from the start. Tarot card prediction can do no better than chance in the long run, and is pure charlatanism, hardly good enough for science. In fact, statistics can be used to disprove the use-value of Tarot cards to anyone who can understand the reasoning. Tarot cards are no better at prediction than religion or soothsayers. _Well, it's useful to me, dammit_, is hardly good enough either. This is no more logical than people standing on the word Faith to actually mean, _I will believe any illogical thing I want, and you cannot prove I am wrong_.


I am just trying on ideas here. However, I mentioned Tarot cards because I was expecting you would not see them as a "science". Some people don't see psychiatry as a science either. Or economics, but they all make predictions based on patterns that form part of their theories.

Just because someone has a theory doesn't mean it is good at making predictions. There are many theories I don't believe in, such as, the belief that the world will end with the coming blood moon. We will see if that prediction holds true in a few weeks. I put it right up there with the interpretation of quantum theory called "many worlds". 

With regard to the Tarot, I think there might be something to it, but I am still trying to make sense out of what that is. I can see the card patterns as a kind of prompt to stimulate the intuition of the reader to come up with a prediction. This would be similar to someone giving a prompt in a writing exercise. But why should the prediction work, that is, be useful to the hearer? It could be that the hearer makes the prediction work by acting to encourage or discourage the prediction. More shocking, at least to our modern biases, there could be a psychic reality that we are a part of that is deliberately talking to us through things like these card patterns. This would be an interpretation of the Tarot that would imply that there is more going on than just a prompt. It would be like saying the muses are real.




> Calculus uses both infinitesimal and infinite "quantities" with wonderful and undeniable results. Complex numbers were useless to begin with, other than providing a theoretical basis for solutions we already knew existed, but have now found their way into many fields where they are gainfully employed with everyday jobs. Transfinite math may well have its day in applied math. No one can say.


The calculus uses the idea of a "limit". This avoids actually working with a zero in the denominator. So infinity is never actually used. Complex numbers could be represented by 2 by 2 matrices if the "imaginary" number i referring to the square root of -1 is a problem. One of the objections to transfinite numbers, such as that given by Leopold Kronecker, is that they have no physical representation and so they should not be considered part of mathematics.
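That 2 by 2 representation can be sketched in a few lines of Python; the pairing of a + bi with the matrix [[a, -b], [b, a]] is the standard one, and the check below simply confirms that the matrix standing in for i squares to minus the identity:

```python
# Represent the complex number a + bi as the real 2x2 matrix [[a, -b], [b, a]].
def to_matrix(a, b):
    return [[a, -b], [b, a]]

def matmul(A, B):
    """Return the 2x2 matrix product A*B."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

i_matrix = to_matrix(0, 1)         # the "imaginary unit" as a real matrix
print(matmul(i_matrix, i_matrix))  # [[-1, 0], [0, -1]], i.e. -1 times the identity
```

Matrix multiplication in this representation reproduces complex multiplication exactly, so nothing "imaginary" ever has to appear.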

Regarding the unreasonable effectiveness of mathematics, people think mathematics _is_ reasonably effective. That may be a cultural bias. This is another reason why I bring up economics, psychology and the Tarot. It is easier for people to see that the patterns studied by these theories, to the extent they work, are "unreasonably effective". They don't believe that about mathematics.




> Determinism is not a mathematical construct. Mathematics is neutral, unless you want to claim that two must follow one is determinism. The fact that the square root of 64 must be 8 does not to me imply that the physical universe is deterministic. There is no physical cause and effect in math, other than the trivial genre I just cited. Maybe you have an idea which could change my mind on this, since it is not ironclad yet. Like a telescope, math neutrally facilitates and organizes logical observation, without standing for a particular view of the universe.
> 
> Maybe you feel the trivial determinism of math "leaks" into science, or pours in. Maybe it does. And surely I cannot prove otherwise. It is an interesting question.


That two must follow one is the basis for the determinism I am talking about. A mathematical function with time as an input parameter becomes a deterministic model. That is why it is a problem. 

Suppose there existed a world function. One could then start with an input state and get any past or future state as the output. As I understand quantum physics, such a world function cannot exist. All one can get is a deterministic "wave function" which only gives a non-random range of probabilities for a particular state.
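A toy illustration of that distinction, with a made-up two-state system (the state names and amplitudes are hypothetical, purely for illustration):

```python
# A "wave function" assigns complex amplitudes to states; the Born rule
# (probability = |amplitude| squared) yields a distribution over outcomes,
# never the definite outcome a classical world function would deliver.
amplitudes = {"up": complex(1, 0) / 2**0.5,
              "down": complex(0, 1) / 2**0.5}
probabilities = {state: abs(a) ** 2 for state, a in amplitudes.items()}
print(probabilities)  # roughly {'up': 0.5, 'down': 0.5}
```

The amplitudes evolve deterministically, yet the best the model can hand back is that 50/50 spread, which is exactly the point.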

Because of that one is forced to ask: if the universe really isn't deterministic as previously believed, why does mathematics work as well as it does? It is just as unreasonably effective as economics, psychiatry or even the Tarot.

----------


## desiresjab

World functions? We will get to that.

I use the infinity symbol all the time in calculus. Zero to infinity are typical bounds for an integral. Just because we have worked out techniques to avoid doing actual arithmetic calculations with infinities and infinitesimals does not mean we are not working with them, it means precisely we are working with them, including them as part of the maths family. Moreover, our methods for doing so are infallible where they apply. Infinity is an idea, not a number. You cannot subtract a number from an idea. The Limit was a new function in mathematics to work with this idea. Many predecessors of Newton and Leibniz almost got it, or had a piece of it tamed and hints of its methods. This broken chain goes all the way back to Archimedes.

Kronecker, unfortunately for his ideas and arguments that cite him for credibility, said the same thing about transcendental numbers--they were not real, as in actual, they were superfluous baggage. He tortured and harried the more sensitive Cantor, who had not been born a millionaire. Kronecker's inherited wealth made him free to incorporate his eccentricity into mathematical philosophy. He was a talented whacko who believed any number system beyond the rationals would eventually be proved superfluous, all secrets of nature and logic in the end yielding themselves up to mere integers and basic arithmetical operations. If there ever was a world function, it would be in integers. To a Platonist this idea is immensely appealing and hard to justify. I would like to believe it, but I also believe the irrational number pi is actual in our universe because it is operational there. Not only is pi irrational, it is transcendental, and the transcendentals form a subclass more numerous than the rest of the class they come from. There are more transcendentals than there are algebraic irrationals. Great mathematicians worked hard to prove such propositions, all of which was nothing but hogwash to Kronecker. Kronecker worked hard to recast irrationals as mere rationals in other clothing. Transcendentals would not yield. No wonder he hated them.

Numbers can be used for chicanery. When non-mathematical readers see pages of advanced statistical formulas in the appendix of a book, it is very impressive, and seems to stamp the Good Housekeeping seal of approval on everything within. There was a book called _The Bible Code_ a few years back. Oprah pushed it on her show. It had all these massive formulas and calculations in back to support its silly theory.

To me Tarot cards are as pure hogwash as irrational and transfinite numbers were to Kronecker. That their pseudo-information can sway some human minds is undeniable. That has nothing to do with the cards themselves. The subject came prepared to be moved, already conditioned by years of cultural superstition. Spirit trumpets, ouija boards, pyramid power, soothsaying, etc., are mere stage decorations, contributing nothing to the action. The mind in this case is its own cause and effect. Those instruments and techniques come from our race's childhood, and I am no more impelled to give them credence than I am the words of goat herders from four thousand years ago who had visions on spoiled cheese. How the mind responds to superstition and can turn it into belief is the underlying field. Tarot cards and the others are just part of the charlatanistic hangover from 19th century supernaturalism. They are no more interesting than objects of any other superstition.

Two following one does not require time, in spite of linguistics. That is your mistake. Two follows one, it always will, it always has; that does not mean it happens at a different time. You lead yourself astray with equations involving time.

When charlatans use mathematics for chicanery, that says nothing about how effective mathematics is or is not. It is not effective in some fields, for instance, because those fields are mere bunko to begin with. It is effective in showing how foolish some notions are, instead of the opposite. That I cannot prove a black cat running across one's path is not bad luck does not mean that with valid mathematics I cannot demonstrate with high reliability that the superstition is bunko, despite any changes in the person's behavior due to cultural conditioning and other parametric adjustments. All I need are enough cats and honest subjects. In this way mathematics can be used to illustrate if not prove the silliness of superstitions to reasonable people, just as it could be used to support the observational evidence for the injunction against incest.
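As a sketch of the kind of demonstration being described, here is an exact two-sided binomial test in plain Python; the trial counts are invented purely for illustration:

```python
from math import comb

def two_sided_binomial_p(k, n, p=0.5):
    """Exact p-value: probability of a count at least as far from n*p as k is."""
    deviation = abs(k - n * p)
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j)
               for j in range(n + 1)
               if abs(j - n * p) >= deviation)

# Hypothetical data: 200 black-cat crossings, "bad luck" reported 104 times.
p_value = two_sided_binomial_p(104, 200)
print(p_value > 0.05)  # True: indistinguishable from a fair coin
```

With enough cats and honest subjects, a reported rate that close to 50% gives no reason to reject plain chance.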

Two following one is independent of time, therefore of physics. When the big bang spit out the laws of physics there was no need to spit out that two follows one. Two can only follow one, but it does not do it a certain time later. A universe with different physics can be imagined, but a universe where one is not followed by two cannot. They are independent of each other. To cite as an exception a universe which somehow runs backwards does nothing to discourage my belief that such arguments are mere semantics. Two also follows one in a universe which runs backwards, since the notion has nothing to do with time. In fact, two following one has no more to do with physics than a microscope has with the laws of biology.

I do not have to say two follows one. I can say two is the whole number beside and greater than one, to get rid of language that seems to suggest something happening in sequential time. Determinism involves sequencing in time; one being a smaller whole number than two does not, is the point. Notice that my new phrasing seemed to transfer responsibility from time to space. Mere linguistic limitation in action. That one is the smaller neighbor of two is independent of both space and time.

----------


## YesNo

> I use the infinity symbol all the time in calculus. Zero to infinity are typical bounds for an integral. Just because we have worked out techniques to avoid doing actual arithmetic calculations with infinities and infinitesimals does not mean we are not working with them, it means precisely we are working with them, including them as part of the maths family. Moreover, our methods for doing so are infallible where they apply. Infinity is an idea, not a number. You cannot subtract a number from an idea. The Limit was a new function in mathematics to work with this idea. Many predecessors of Newton and Leibniz almost got it, or had a piece of it tamed and hints of its methods. This broken chain goes all the way back to Archimedes.


Since both the infinitesimal and the infinity symbol are short for a limit process, infinity itself is not part of calculus. 
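To illustrate what I mean by a limit process, here is a small numerical sketch in Python; the integrand e^(-x) is just a convenient example, and the infinity symbol stands for "let T grow without bound":

```python
from math import exp

def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: exp(-x)
for T in (1, 5, 10, 20):
    print(T, integrate(f, 0, T))  # values climb toward the limit 1
```

No infinity ever enters the computation; only ever-larger finite bounds whose results settle toward a finite limit.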




> Kronecker, unfortunately for his ideas and arguments that cite him for credibility, said the same thing about transcendental numbers--they were not real, as in actual, they were superfluous baggage. He tortured and harried the more sensitive Cantor, who had not been born a millionaire. Kronecker's inherited wealth made him free to incorporate his eccentricity into mathematical philosophy. He was a talented whacko who believed any number system beyond the rationals would eventually be proved superfluous, all secrets of nature and logic in the end yielding themselves up to mere integers and basic arithmetical operations. If there ever was a world function, it would be in integers. To a Platonist this idea is immensely appealing and hard to justify. I would like to believe it, but *I also believe the irrational number pi is actual in our universe because it is operational there.* Not only is pi irrational, it is transcendental, and the transcendentals form a subclass more numerous than the rest of the class they come from. There are more transcendentals than there are algebraic irrationals. Great mathematicians worked hard to prove such propositions, all of which was nothing but hogwash to Kronecker. Kronecker worked hard to recast irrationals as mere rationals in other clothing. Transcendentals would not yield. No wonder he hated them.


What do you mean by pi being operational in our universe? I am reading a book by Frost and Prechter, "Elliott Wave Principle". They think the golden ratio is operational in our universe, specifically in the social mood that drives the stock market. Is this the sort of thing you are referring to?




> Numbers can be used for chicanery. When non-mathematical readers see pages of advanced statistical formulas in the appendix of a book, it is very impressive, and seems to stamp the Good Housekeeping seal of approval on everything within. There was a book called _The Bible Code_ a few years back. Oprah pushed it on her show. It had all these massive formulas and calculations in back to support its silly theory.
> 
> To me Tarot cards are as pure hogwash as irrational and transfinite numbers were to Kronecker. That their pseudo-information can sway some human minds is undeniable. That has nothing to do with the cards themselves. The subject came prepared to be moved, already conditioned by years of cultural superstition. Spirit trumpets, ouija boards, pyramid power, soothsaying, etc., are mere stage decorations, contributing nothing to the action. The mind in this case is its own cause and effect. Those instruments and techniques come from our race's childhood, and I am no more impelled to give them credence than I am the words of goat herders from four thousand years ago who had visions on spoiled cheese. How the mind responds to superstition and can turn it into belief is the underlying field. Tarot cards and the others are just part of the charlatanistic hangover from 19th century supernaturalism. They are no more interesting than objects of any other superstition.


I would need to see the evidence, not just an assertion. I don't think it has to do with the cards themselves either. It could be a pendulum or tea leaves or I Ching yarrow sticks.




> Two following one does not require time, in spite of linguistics. That is your mistake. Two follows one; it always will and always has, but that does not mean it happens at a later time. You lead yourself astray with equations involving time.


All you have to do is let t represent time, assume time can vary continuously, and create a function of t. Once you have that, I don't see how you can avoid determinism.




> When charlatans use mathematics for chicanery that says nothing about how effective mathematics is or is not. It is not effective in some fields, for instance, because those fields are mere bunko to begin with. It is effective in showing how foolish some notions are instead of the opposite. Because I cannot prove that a black cat running across one's path is not bad luck, *does not mean that with valid mathematics I cannot demonstrate with high reliability that the superstition is bunko*, despite any changes in the person's behavior due to cultural conditioning and other parametric adjustments. All I need are enough cats and honest subjects. In this way mathematics can be used to illustrate if not prove the silliness of superstitions to reasonable people, just as it could be used to support the observational evidence for the injunction against incest.


Why do you want to believe it is not bad luck? I'm not saying it is. 




> Two following one is independent of time, therefore of physics. When *the big bang spit out the laws of physics* there was no need to spit out that two follows one. Two can only follow one, but it does not do it a certain time later. A universe with different physics can be imagined, but a universe where one is not followed by two cannot. They are independent of each other. To cite as an exception a universe which somehow runs backwards does nothing to discourage my belief that such arguments are mere semantics. Two also follows one in a universe which runs backwards, since the notion has nothing to do with time. In fact, two following one has no more to do with physics than a microscope has with the laws of biology.
> 
> I do not have to say two follows one. I can say two is the whole number beside and greater than one, to get rid of language that seems to suggest something happening in sequential time. Determinism involves sequencing in time; that one is a smaller whole number than two does not. That is the point. Notice that my new phrasing seemed to transfer responsibility from time to space. Mere linguistic limitation in action. That one is the smaller neighbor of two is independent of both space and time.


The big bang did not create the laws of physics. Human beings created those laws to help human beings work with nature. The same thing applies to the Bible or the Quran. God did not write those works. Human beings did.

----------


## desiresjab

Human beings did not create the laws of physics, they discovered them and wrote them down. That is called formulating. Minus thirty-two feet per second per second would still be the acceleration of an object falling to earth, whether or not a man ever existed to formulate it, or the second derivative of the position equation, if you will. Physics does not equate well to the Bible and the Koran except broadly as _systems that explain things_. In the superstition system the best storytellers explained things, and it was left for the best observers under the scientific system to correct the ancients later on almost every point, except where they themselves had been mathematical in their approach to understanding "nature." Ancient mathematics is trustworthy; ancient physics often is not. Archimedes knew the volume of a sphere. Aristotle's explanation of a falling object's acceleration, that it is like a horse that runs faster as it approaches the barn, is pretty weak beside -16t^2.
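The two formulations above are the same fact: differentiating the position formula s(t) = -16t^2 twice recovers the -32 ft/s^2 acceleration. A quick numerical check of that claim (a sketch only; the function names are mine):

```python
def s(t):
    """Position change (feet) of a dropped object after t seconds: -16 t^2."""
    return -16.0 * t * t

def derivative(f, t, h=1e-5):
    """Central-difference estimate of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

t = 2.0
velocity = derivative(s, t)                               # about -64 ft/s
acceleration = derivative(lambda u: derivative(s, u), t)  # about -32 ft/s^2
print(velocity, acceleration)
```

The second derivative comes out near -32 no matter what t you pick, which is just the statement that the acceleration is constant.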

I know that cosmology implies discussion of physics, but discussion can only be facilitated at this point by separating math and physics for the moment, to show that they are as independent as stars and telescopes. Math has no more effect on physics than telescopes have on stars, but provides a similar service.

A little man keeps whispering in your ear about equations with t as time. We are not as far as time in the discussion yet. You are like Aristotle's horse.

*A number line does not need sequential time where causality happens*. The truth that _two follows one_, or that two is the _greater-in-magnitude-neighbor of one_, is independent of time or space. I need you to see that.

Either agree to this or explain to me some properties of a universe where two is not the larger neighbor of one. Don't try any semantic arguments. Failure of English to state the proposition without apparent reference to space or time is a weakness of language, not of number. Running the universe backwards will not work either, because that can be cured by adding a negative sign to reverse actions performed backwards, and could still be considered. Math and physics are independent. The discussion of what they _are_ can be left for later. Right now, you are forced to admit they are independent, or describe the physics of a universe where two is not the successor of one. If two is not the successor of one, then there is no two at all. If two did not exist, it would shortly have to be invented anyway, like we invented the number i for the square root of -1.

The calculus discussion is no more than a sidebar where we are sparring to sharpen our swords. I have no actual protest over your stance. It is the stance of most calculus teachers, who are not paid to be philosophers. They repeat the orders of higher priests who are freakishly averse to contradiction and demand that every proposition be on logical theoretical footing. Never mind they had to add a bunch of words to explain how they made it so. A limit exists to deal with infinity. We either have a way of dealing with infinity or we do not. We have a way. That means we are using it. I consider the dispute here to be semantic. That argument is so old it bores. You understand mathematicians were integrating from 0 to infinity with great success long before they made it technically illegal? Infinity and infinitesimals are at the very heart of calculus. Ways to deal with them and get back results is what calculus is.

Like I said, though, a mere sidebar.

Pi is present in the formulations of ubiquitous patterns we observe in nature. It is vital to scientific calculations of all kinds. Its relationship to geometric figures and natural numbers is well documented. Like e, it is a very special number intimately related to "the way things work" as well as the way things have to work. We know it exists and cannot produce it directly. Rough copies of it work just fine, depending on the precision needed for the application. We could always pretend that it does not exist, but we would have to admit that the organization of our universe is based on something that does not exist. The fact that these important numbers keep coming up again and again is enough to say they are operational. Do not mistake this for saying they are part of the causal train. They *do* nothing.
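One illustration of pi "coming up again and again": it falls out of a plain counting experiment that never mentions circumferences. A Monte Carlo sketch (Python, standard library only; the seed is arbitrary):

```python
import random

random.seed(1)

# Throw random points into the unit square; the fraction landing
# inside the quarter disk x^2 + y^2 <= 1 tends to pi/4.
n = 1_000_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
estimate = 4 * hits / n
print(estimate)   # close to 3.14
```

Nobody put pi into the experiment; counting random points pulls it back out, which is the sense in which the number is operational without causing anything.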

But that is physics. You know what the real holdup is. Make one of your choices, please.

----------


## YesNo

> Human beings did not create the laws of physics, they discovered them and wrote them down. That is called formulating. Minus thirty-two feet per second per second would still be the acceleration of an object falling to earth, whether or not a man ever existed to formulate it, or the second derivative of the position equation, if you will. Physics does not equate well to the Bible and the Koran except broadly as _systems that explain things_. In the superstition system the best storytellers explained things, and it was left for the best observers under the scientific system to correct the ancients later on almost every point, except where they themselves had been mathematical in their approach to understanding "nature." Ancient mathematics is trustworthy; ancient physics often is not. Archimedes knew the volume of a sphere. Aristotle's explanation of a falling object's acceleration, that it is like a horse that runs faster as it approaches the barn, is pretty weak beside -16t^2.


Since the laws of physics change as our understanding of nature changes, they were made by humans. They are texts. My view is that to believe our texts are out there in reality is the same as believing that God wrote the Bible.

Also you have introduced determinism through t in "-16t^2".




> I know that cosmology implies discussion of physics, but discussion can only be facilitated at this point by separating math and physics for the moment, to show that they are as independent as stars and telescopes. Math has no more effect on physics than telescopes have on stars, but provides a similar service.
> 
> A little man keeps whispering in your ear about equations with t as time. We are not as far as time in the discussion yet. You are like Aristotle's horse.


I have no problem with separating mathematics from physics.




> *A number line does not need sequential time where causality happens*. The truth that _two follows one_, or that two is the _greater-in-magnitude-neighbor of one_, is independent of time or space. I need you to see that.


I can see that two follows one in any universe.




> Either agree to this or explain to me some properties of a universe where two is not the larger neighbor of one. Don't try any semantic arguments. Failure of English to state the proposition without apparent reference to space or time is a weakness of language, not of number. Running the universe backwards will not work either, because that can be cured by adding a negative sign to reverse actions performed backwards, and could still be considered. Math and physics are independent. The discussion of what they _are_ can be left for later. Right now, you are forced to admit they are independent, or describe the physics of a universe where two is not the successor of one. If two is not the successor of one, then there is no two at all. If two did not exist, it would shortly have to be invented anyway, like we invented the number i for the square root of -1.


I agree that math and physics are independent. Since math leads to determinism and the universe is not deterministic, they are not the same.




> The calculus discussion is no more than a sidebar where we are sparring to sharpen our swords. I have no actual protest over your stance. It is the stance of most calculus teachers, who are not paid to be philosophers. They repeat the orders of higher priests who are freakishly averse to contradiction and demand that every proposition be on logical theoretical footing. Never mind they had to add a bunch of words to explain how they made it so. A limit exists to deal with infinity. We either have a way of dealing with infinity or we do not. We have a way. That means we are using it. I consider the dispute here to be semantic. That argument is so old it bores. You understand mathematicians were integrating from 0 to infinity with great success long before they made it technically illegal? Infinity and infinitesimals are at the very heart of calculus. Ways to deal with them and get back results is what calculus is.
> 
> Like I said, though, a mere sidebar.


Yes, it is a sidebar, but I am glad you acknowledge that calculus is based on limits to be logically consistent. Doing this does avoid philosophic arguments.




> Pi is present in the formulations of ubiquitous patterns we observe in nature. It is vital to scientific calculations of all kinds. Its relationship to geometric figures and natural numbers is well documented. Like e, it is a very special number intimately related to "the way things work" as well as the way things have to work. We know it exists and cannot produce it directly. Rough copies of it work just fine, depending on the precision needed for the application. We could always pretend that it does not exist, but we would have to admit that the organization of our universe is based on something that does not exist. The fact that these important numbers keep coming up again and again is enough to say they are operational. Do not mistake this for saying they are part of the causal train. They *do* nothing.
> 
> But that is physics. You know what the real holdup is. Make one of your choices, please.


You have told me that physics and math are separate and now you say that pi is part of ubiquitous patterns in nature. I think the golden ratio can also be seen in nature, but what one gets are not exact examples. They are, however, close enough that one can use pi or the golden ratio in theories that try to model reality. Theories about reality and reality are not the same thing.

----------


## desiresjab

How does it feel to never budge your feet? 

There is nothing else I can do with you, since you make no effort. Go ahead and admit the calculus argument is mere semantics, then follow it, as if this were somehow relevant to anything, with the complaint that I used t to represent time in -16t^2.

You are not capable of considering a proposition requiring no time or space. The abstraction is too much. Since you could not stick with the first question I proposed and absolutely refuse to consider it, you really do not expect me to go on with you, do you?

----------


## desiresjab

Here is the question one more time, in simplified form. Would two be the successor of one in any universe, despite its physical laws?

If answer is no, provide example universe.

If answer is yes, this means the proposition is independent of any universe that could have developed.

Yes or no, please. Worry later about what each answer might imply for physics and determinism.

----------


## YesNo

> Here is the question one more time, in simplified form. Would two be the successor of one in any universe, despite its physical laws?


Yes.  :Smile: 




> If answer is no, provide example universe.
> 
> If answer is yes, this means the proposition is independent of any universe that could have developed.
> 
> Yes or no, please. Worry later about what each answer might imply for physics and determinism.

----------


## Ecurb

"Laws" (of physics or an science) are "theoretical principles deduced from the observation of facts".

Objects do not accelerate at 32 feet per second per second because of the law of gravity; the law exists because objects accelerate at that rate. Falling objects accelerated at that rate before the law existed, before language existed. "Laws" are linguistic creations.

----------


## desiresjab

> Yes.


Good. Thank you. This means that the notion of two being the successor of one has priority over "_the way things physically work_" in any universe that could ever come into being. To me it does. They have no way of contradicting it.

These underlying notions of mathematics have some kind of priority in any universe imaginable, but I cannot tell you how or if they change or shape "_the way things work_." I don't think they do. They are just something that is, somehow implicit in thought that cannot conceive it otherwise. Just the notion of singularity itself implies duality by definition.

Space is not sacred, time is not sacred. We now conceive of them differently than all the centuries that preceded the 20th. But _two is the successor of one_ stands as firmly as ever, immutable, unchanging, impossible to be otherwise.

I think that is pretty cool, and stunning, once it is accepted, that something has priority to be itself in any possible universe. Does that amount to a constraint on time or matter? It never gets involved, yet always is there, and never shares with its opposite notion.

Before any laws were written, could any other have been written? Events within a universe running backwards would still be labeled 1, 2, 3...

Notice that would actually be recorded in backwards fashion to the inhabitants, and two still succeeds one.

Setting aside perplexing and sophistic observations of so-called proofs, like the one in the preceding paragraph, I am happy with the confession and realization that there are principles in our universe *which are true and could not be untrue* in any universe. Welcome.

----------


## desiresjab

> "Laws" (of physics or an science) are "theoretical principles deduced from the observation of facts".
> 
> Objects do not accelerate at 32 feet per second per second because of the law of gravity; the law exists because objects accelerate at that rate. Falling objects accelerated at that rate before the law existed, before language existed. "Laws" are linguistic creations.


The number line labels its own cardinality and ordinality at the same time. In concept, the numbers all exist at once, instead of coming into being sequentially. Two is, has been, and ever will be the successor of one. Before anyone was there to think of it, before the universe itself, no option was possible but to create a universe where two-ness succeeds one-ness, in the realm of pure abstraction, and not with any reference to time or space. Two-ness exceeds one-ness in a different abstraction called magnitude. It is space- and time-oriented language that has difficulty getting away from all such references. They are not necessary for the proposition to be seen.

----------


## desiresjab

> Yes.


Now I have some observations about pi and e, but other constants as well, and whether they exist. They are infinite, so technically cannot fully be produced, yet they normally play a part in the best descriptions we can make of everything from waves, to boom and bust cycles in animal populations, to radioactive decay--virtually all scientific fields produce formulas involving these two numbers in particular. They are not affecting radioactive decay, they merely express how it does it.

Other than a handful of transcendental constants, which arguably do not even exist, no other numbers are special. Seven is not related to nature in a widespread way. Neither is three, nor is one hundred and forty-one, nor any other integer or fraction. Only these numbers exhibit that connection, and they all belong to a higher order of infinity. Rational numbers (and with them the algebraic irrationals) are infinite but countable. This means that Cantor devised a clever way of labeling them. The transcendentals are infinite, but not countable, as the rationals are. In other words, there is no strategy for labeling them all. Rather, Cantor was clever enough again to show that you could not do this. It was impossible. Transcendentals always resist complete labeling of their species. It cannot be otherwise.
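Cantor's labeling of the rationals can be sketched directly: walk the diagonals of the p/q grid and skip fractions that are not in lowest terms, and every positive rational receives a position in one list. A minimal Python generator (names are mine, for illustration):

```python
from fractions import Fraction
from math import gcd

def rationals():
    """Enumerate every positive rational exactly once, Cantor-style:
    walk the diagonals p + q = n of the p/q grid, skipping fractions
    that are not in lowest terms so nothing is counted twice."""
    n = 2
    while True:
        for p in range(1, n):
            q = n - p
            if gcd(p, q) == 1:   # skip duplicates such as 2/4
                yield Fraction(p, q)
        n += 1

gen = rationals()
print([next(gen) for _ in range(5)])   # 1, 1/2, 2, 1/3, 3
```

No such listing exists for the transcendentals: Cantor's diagonal argument shows that any attempted enumeration of the reals must miss some, which is the "it cannot be otherwise" above.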

Only transcendental constants seem to have the special connection. An amazing fact. What are they? I don't know. They are little vortices that drill all the way to infinity. When one considers how much more numerous these fictitious creatures are than their rational cousins on the number line, maybe it is not surprising that they are all transcendental. Scatter a random handful of constants in a universe and they would all be transcendental numbers by the laws of probability. They are not physical but mathematical constants. They appear everywhere in mathematics. They are widespread in physics and every other science. Just what their deeper reality is, if they have one, mystifies me. I do not know why they relate or if other constants would replace them in a different universe. They are baffling.

----------


## Ecurb

> The number line labels its own cardinality and ordinality at the same time. In concept, the numbers all exist at once, instead of coming into being sequentially. Two is, has been, and ever will be the successor of one. Before anyone was there to think of it, before the universe itself, no option was possible but to create a universe where two-ness succeeds one-ness, in the realm of pure abstraction, and not with any reference to time or space. Two-ness exceeds one-ness in a different abstraction called magnitude. It is space- and time-oriented language that has difficulty getting away from all such references. They are not necessary for the proposition to be seen.


Clearly, math is different from physics in that it involves purely logical systems. All mathematical theories are merely restatements of the basic premises. I haven't thought about whether those premises are logically necessary or not, and I'm not sure. Nonetheless, math is a language, and its rules are linguistically determined. It would be logically possible (for example) to have a number system that didn't differentiate whole numbers from other numbers -- the number line might be continuous.

----------


## YesNo

> Now I have some observations about pi and e, but other constants as well, and whether they exist. They are infinite, so technically cannot fully be produced, yet they normally play a part in the best descriptions we can make of everything from waves, to boom and bust cycles in animal populations, to radioactive decay--virtually all scientific fields produce formulas involving these two numbers in particular. They are not affecting radioactive decay, they merely express how it does it.


They "exist" already in the sense that they can be defined. I don't understand your use of the word "exist".

It is not just that they cannot be fully produced, but if reality contains discontinuous quantum jumps, the use of these mathematical constants may break down at some decimal precision and fail to be useful in modeling reality.




> Other than a handful of transcendental constants, which arguably do not even exist, no other numbers are special. Seven is not related to nature in a widespread way. Neither is three, nor is one hundred and forty-one, nor any other integer or fraction. Only these numbers exhibit that connection, and they all belong to a higher order of infinity. Rational numbers (and with them the algebraic irrationals) are infinite but countable. This means that Cantor devised a clever way of labeling them. The transcendentals are infinite, but not countable, as the rationals are. In other words, there is no strategy for labeling them all. Rather, Cantor was clever enough again to show that you could not do this. It was impossible. Transcendentals always resist complete labeling of their species. It cannot be otherwise.


I don't understand why the other transcendental constants don't exist. Is it because there may be no way to compute or define them?

The golden mean, (1+sqrt(5))/2, might be another number, although algebraic, right up there with pi and e, or do you not consider it so?

Also I don't understand what the uncountable nature of the set of transcendental numbers has to do with this argument. It is the set of transcendental numbers as a whole that is uncountable. The number of elements in a subset of those numbers may well be countable or even finite. For example, the set of numbers, {pi,e}, containing only pi and e is countable. The number of elements in that set is even finite and equals 2 since there are only two numbers in that set.




> Only transcendental constants seem to have the special connection. An amazing fact. What are they? I don't know. They are little vortices that drill all the way to infinity. When one considers how much more numerous these fictitious creatures are than their rational cousins on the number line, maybe it is not surprising that they are all transcendental. Scatter a random handful of constants in a universe and they would all be transcendental numbers by the laws of probability. They are not physical but mathematical constants. They appear everywhere in mathematics. They are widespread in physics and every other science. Just what their deeper reality is, if they have one, mystifies me. I do not know why they relate or if other constants would replace them in a different universe. They are baffling.


What "special connection" are you talking about? I don't understand "little vortices that drill all the way to infinity". Don't forget at some level there are discontinuous quantum jumps in the quantum physics model of reality.

I agree that if you picked random numbers from the set of real numbers, the result would most likely be all transcendental numbers since there are many more of them. However, I don't know to what extent mathematical constants, which exist precisely through their definitions, imply that reality must contain those constants simply because they are used in physical models. This is the problem of confusing the model with reality.

----------


## desiresjab

Right all.

Just because a mathematical structure can be created does not mean any kind of physical manifestation of that system, such as a universe, is possible using it as a replacement for _two is the successor of one_. We can formulate p-adic numbers, but can they underlie a physical universe where _two is the successor of one_ is no longer true? Can it work?

About the golden ratio formula. Yes, it is a mere algebraic irrational. I was thinking of it as a possible exception when I wrote the last post. But at this point, I do not believe it is as "strongly involved" as pi and e. The golden ratio is not a number naturally occurring in scientific research at every level, though just about anything in math can be shown to be related, even if remotely, to anything else by enough manipulation.

The mere fact that it comes from a countable set makes it suspect, probabilistically, but its expectation is minutely north of zero, so it was not quite impossible that a constant of great importance which was non-transcendental might be among some finite number of them distributed in a universe. However, note with interest also that 2 raised to the golden ratio as a power would be transcendental, by the Gelfond-Schneider theorem. I reserve judgement on this.

About constants: when one measures the energy present in the vacuum of space, for instance, that is a different kind of constant, an actual physical constant based on measurement. This constant is never going to pop up everywhere in math and science like pi and e do. Pi and e are mathematical constants.

The limits of these numbers' ability to reflect reality and quantum jumps across scale in formulas is indeed a great question, and beyond me. Somewhere among the equations of quantum physics or their derivations are sure to be found the trusty trig functions, which are transcendental because of good old pi.

Mathematicians do some amazing things. I think there is an accepted proof out there showing that one or the other of pi or e is more transcendental than the other. I forget which. I will have to look that up and get back to you.

----------


## YesNo

> Somewhere among the equations of quantum physics or their derivations are sure to be found the trusty trig functions, which are transcendental because of good old pi.


There is Euler's formula, e^(ix) = cos x + i sin x, which relates e to sines and cosines. If you put pi in for x, you get Euler's identity, e^(i*pi) + 1 = 0. This relates e, pi and i together.
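Both formula and identity can be checked numerically in a couple of lines (Python standard library only; a sketch, not part of the argument):

```python
import cmath
import math

# Euler's formula: e^(ix) = cos(x) + i*sin(x), checked at x = 1
x = 1.0
print(cmath.exp(1j * x))                 # matches cos(1) + i*sin(1)
print(complex(math.cos(x), math.sin(x)))

# Euler's identity: substituting x = pi gives e^(i*pi) + 1 = 0
print(cmath.exp(1j * math.pi) + 1)       # ~0, up to floating-point rounding
```

The last line is not exactly zero only because math.pi is itself a rough copy of pi, which is rather on point for this thread.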

Fourier analysis can approximate general functions with sums (superpositions) of trig functions. My positivist leanings make me doubt that reality actually contains these superpositions, for example the view that one might be able to use them to split reality into "many worlds", although they make a mathematical model of reality easier to work with.
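The superposition idea can be made concrete: a square wave, which looks nothing like a sine, is approximated by summing odd-harmonic sines. A sketch of the standard Fourier series for it (Python, standard library only; function name is mine):

```python
import math

def square_partial(x, n_terms):
    """Partial Fourier sum for a unit square wave:
    (4/pi) * [sin(x) + sin(3x)/3 + sin(5x)/5 + ...]."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# On (0, pi) the square wave equals 1; more terms get closer to it.
for n in (1, 10, 200):
    print(n, square_partial(math.pi / 2, n))
```

Whether the physical wave "contains" those component sines, or they are only bookkeeping in the model, is exactly the model-versus-reality question.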

----------


## desiresjab

> There is Euler's formula which relates e and sines and cosines. If you put pi in for x, you get Euler's identity. This relates e, pi and i together. 
> 
> Fourier analysis can approximate general functions with sums (superpositions) of trig functions. My positivist leanings make me doubt that reality actually contains these superpositions, for example the view that one might be able to use them to split reality into "many worlds", although they make a mathematical model of reality easier to work with.


Lie algebra is applied to infinitesimal transformations. This sounds like it could be relevant to the quantum jumps you spoke of earlier. I know that a particular brand of it is being used by at least one research group to investigate relations between consciousness and quantum theory, which automatically makes it of interest. In various algebraic structures some normal properties of arithmetic hold and others do not. The kind of object it is depends on which particular properties hold and which do not. Monoids, groupoids, subgroups, magmas, lattices _et al_, all merely pervert one or more of these properties or eliminate operators. Good old Abelian groups have all the properties of normal arithmetic, I believe, right up to commutativity. Everything else seems like a brand of perversion of Abelian groups to explore deeper and deeper complexity and explain more relations easily. But the non-Abelian structures work, too, and have plenty of applications in advanced research.

Maybe models from one of these structures will eventually be able to capture some laws of consciousness in quantum mechanical math, or even vice versa! Wouldn't that be something? One cannot help but think of the words of Yeats, since this is a literature forum: _Whatever flames upon the night/Man's own resinous heart has fed_. 

Are the laws of consciousness to be found in quantum mechanics, or the laws of quantum mechanics in consciousness?

----------


## desiresjab

The continuity of important transcendentals might be the most natural smoothing functions across quantum jumps, similar to Euler's Gamma function and the factorial function. The action all takes place in the complex system, not the lowly reals, these days. The great graphs, like the Mandelbrot set, all come from complex number manipulations. Attempts to solve the Riemann hypothesis, for instance, the most important unsolved problem in prime number theory, take place in the complex domain. If it falls, other important problems will fall right behind it, yielding many conjectured but unproven results.

----------


## YesNo

I don't know what you mean by the "continuity of important transcendentals".

----------


## HCabret

I like when non-scientists talk about quantum mechanics like they have anything approaching an understanding of it. 

"I think I can safely say that nobody understands quantum mechanics" -Richard Feynman

----------


## YesNo

Yeah, it is not easy to understand. Nor is it easy to understand why it is so hard to understand, but if Feynman could not understand it either, I feel in good company.

----------


## HCabret

> Yeah, it is not easy to understand. Nor is it easy to understand why it is so hard to understand, but if Feynman could not understand it either, I feel in good company.


I am completely uncomfortable with my inability to understand quantum mechanics. I am, however, confident enough to admit that I have literally no understanding whatsoever of quantum mechanics. 

I've read _The Elegant Universe_, and I barely understand the big-picture basics of general relativity. At least enough to watch Interstellar.

----------


## desiresjab

> I don't know what you mean by the "continuity of important transcendentals".


I was clumsy with that. What I intended was that the continuing values in each decimal place of pi or another constant might turn out to provide best-fit bridges of continuity across quantum jump states. Speculation. Dreaming. I wouldn't put it past them, though. Something for a science fiction story.

----------


## desiresjab

> I am completely uncomfortable with my inability to understand quantum mechanics. I am, however, confident enough to admit that I have literally no understanding whatsoever of quantum mechanics. 
> 
> I've read _The Elegant Universe_, and I barely understand the big-picture basics of general relativity. At least enough to watch Interstellar.


I think one of the things that now makes physics so appealing to artistic types is its mystery. The quantum world is not the same old Isaac Newton block party. It is not even the Einstein block party. It is a wilder party than both.

----------


## desiresjab

If no universe is possible where two is not the successor of one, that would act as a constraint upon a creator of universes, would it not? Or would act as a constraint upon the random creation of universes through any process, if you prefer.

But does this even qualify as a constraint, since it does not, in fact, eliminate even one possible universe? It is as tautological as saying: _You may not create any universe which may not exist_.

Now the big question: how much of basic arithmetic _must be true_ in any universe we can conceive of--a universe of actual particles and physics, not just abstractions? Not enough is known about how particles come to exist in the first place to answer this authoritatively.

But even in our everyday world various algebraic structures have numerous and sometimes profound applications and implications, though they are not of our natural "home algebra." Each one of them defies axioms of our home arithmetic by tweaking just one or more deep properties, such as distribution across multiplication or association across addition, and letting the system run, so to speak. My belief is that the job is going to require all the tools of mathematics and likely some strains that are not invented yet.

Could a type of universe whose existence is impossible from our perspective ever make the leap from abstraction to reality? Does our inability to imagine a universe make its existence impossible?

----------


## YesNo

> I am completely uncomfortable with my inability to understand quantum mechanics. I am, however, confident enough to admit that I have literally no understanding whatsoever of quantum mechanics. 
> 
> I've read _The Elegant Universe_, and I barely understand the big-picture basics of general relativity. At least enough to watch Interstellar.


I don't know much about it either. Someone posts something. I check it out further. Jim Baggott's "The Meaning of Quantum Theory" is a good summary of the issues. 

Quantum physics is not only for physicists as desiresjab mentioned earlier. It is not Newton's or Einstein's block party. The standard Copenhagen interpretation ties the hands of physicists with its positivism and leaves the interesting interpretations for those willing to speculate on what reality might actually be. It is now a philosopher's playground.

----------


## YesNo

> If no universe is possible where two is not the successor of one, that would act as a constraint upon a creator of universes, would it not? Or would act as a constraint upon the random creation of universes through any process, if you prefer.


Since our universe had a beginning and is finite, I assume there are other universes. However, I don't think the various universes were created randomly. That is the sort of speculation one can expect to hear from those pushed up against the wall with our universe having a beginning who do not want to admit that something or someone made a choice to start it. I am not interested in looking for constraints on whatever consciousness created ours. 




> But does this even qualify as a constraint, since it does not, in fact, eliminate even one possible universe? It is as tautological as saying: _You may not create any universe which may not exist_.


The more interesting question is does one need consciousness for our universe to exist at all. 




> Now the big question: how much of basic arithmetic _must be true_ in any universe we can conceive of--a universe of actual particles and physics, not just abstractions? Not enough is known about how particles come to exist in the first place to answer this authoritatively.
> 
> But even in our everyday world various algebraic structures have numerous and sometimes profound applications and implications, though they are not of our natural "home algebra." Each one of them defies axioms of our home arithmetic by tweaking just one or more deep properties, such as distribution across multiplication or association across addition, and letting the system run, so to speak. My belief is that the job is going to require all the tools of mathematics and likely some strains that are not invented yet.
> 
> Could a type of universe whose existence is impossible from our perspective ever make the leap from abstraction to reality? Does our inability to imagine a universe make its existence impossible?


I think we already discussed this and I granted that mathematics is true in itself. It does not depend on the existence of a universe for it to be true. So the question of how much of basic arithmetic must be true in any universe is trivial. If the mathematics is logically consistent, it is true in any universe.

This sounds to me like you are confusing models with reality. Our universe is not reducible to mathematics. Some theories within mathematics may find use-value as approximations of reality. They allow us to make predictions through them more simply, but those models are only approximations. They are not reality. They did not create reality.

This is one of the reasons I brought up the Tarot earlier as well as economics or psychology or Elliott Wave technical analysis of the markets. These are all models which offer some predictive power, but they have little mathematics backing them up. My point: a useful model of reality does not even have to be mathematical.

----------


## YesNo

> If no universe is possible where two is not the successor of one...


It occurred to me this morning that two being the successor of one is a binary order relation on the set of integers. One could also define an opposite binary relation where two is less than one. Both of these relations would be possible in all universes, both real and imaginary, since they are based on definitions which are independent of those universes.

----------




## desiresjab

> It occurred to me this morning that two being the successor of one is a binary order relation on the set of integers. One could also define an opposite binary relation where two is less than one. Both of these relations would be possible in all universes, both real and imaginary, since they are based on definitions which are independent of those universes.


I don't think that would interfere with duality. Duality is independent of the spelling of the numbers one uses to define it.

It seems simpler after all to ask: How much is it possible for home arithmetics to differ in other universes from our own? No doubt mathematics could be approached in different ways. Requiring rigorous proofs (our way) would be only one way to proceed. Does mathematics always end up in the same place, no matter where it starts? It is not the same even among human cultures. Yet mere counting is at the heart of all mathematical beginnings that I know of. Must that itself be so? I find it difficult to imagine how a civilization might come upon the arithmetic of matrices first and then develop our normal arithmetic as a strange alternative. Could our fundamental arithmetic seem strange to them but operations on matrices seem completely normal and natural? Not sure how that could happen, or if it could. How would such a universe support that view?

----------


## desiresjab

In the end I go back to Shakespeare. _There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy_.

How so? I have even dreamed of universes I cannot imagine. Therefore all of these and more must exist, if Shakespeare is right.

----------


## YesNo

> I don't think that would interfere with duality. Duality is independent of the spelling of the numbers one uses to define it.


I don't understand what "duality" has to do with this. One could probably construct a countably infinite number of binary order relations on the integers. Start with 0 and then use the axiom of choice to pick the next integer making sure it does not agree with the one in a previous order relation. It looks like there might even be an uncountable number of such possible relations.




> It seems simpler after all to ask: How much is it possible for home arithmetics to differ in other universes from our own?


I think we need a definition of "home arithmetics" to continue this. The term doesn't make sense to me.




> No doubt mathematics could be approached in different ways. Requiring rigorous proofs (our way) would be only one way to proceed.


My suspicion is that mathematics requires proofs or it is not mathematics. Our way would be the only way to do it.




> Does mathematics always end up in the same place, no matter where it starts? It is not the same even among human cultures. Yet mere counting is at the heart of all mathematical beginnings that I know of. Must that itself be so? I find it difficult to imagine how a civilization might come upon the arithmetic of matrices first and then develop our normal arithmetic as a strange alternative. Could our fundamental arithmetic seem strange to them but operations on matrices seem completely normal and natural? Not sure how that could happen, or if it could. How would such a universe support that view?


I don't think the universe needs to support this except to allow consciousness to exist, but that brings me back to a previous question: is it possible to have a universe without consciousness?

Edit: Gödel showed that mathematics couldn't be both complete and consistent. All we can hope for is that it is consistent. What that means to me is that if we start with the same assumptions we should reach the same conclusion; otherwise I would have to doubt whether mathematics is consistent at all. I don't see how this depends on the kind of universe we are in.

----------


## Eupalinos

> is it possible to have a universe without consciousness?


Can you go into some more detail as to why this question comes up for you? Interesting discussion.

----------


## desiresjab

> I don't understand what "duality" has to do with this. One could probably construct a countably infinite number of binary order relations on the integers. Start with 0 and then use the axiom of choice to pick the next integer making sure it does not agree with the one in a previous order relation. It looks like there might even be an uncountable number of such possible relations.


Tricks do not mean it could support a universe. We are getting beyond this.




> I think we need a definition of "home arithmetics" to continue this. The term doesn't make sense to me.


Home arithmetic is exactly the fundamental principles of counting we learned in early grade school. That is the system which is normal and intuitive to us. Systems that ignore or counter-define any properties of that system are legitimate in some sense, but counter-intuitive to us. They came later in our development as mathematicians. 




> My suspicion is that mathematics requires proofs or it is not mathematics. Our way would be the only way to do it.


A famous mathematician, maybe Barrow, who wrote _Pi in the Sky_, envisioned a system of math from a civilization which did not require proofs for propositions to be defined as true. A quadrillion examples without a counterexample appearing was rigorous enough for them. They were able to perform other operations with perfect confidence (for them) that no counterexamples would ever crop up. They could assume, for instance, that there are infinitely many pairs of twin primes, and base further calculations on this "fact," from which they might glean other "facts." This mathematics would be philosophically different from ours, yet easily capable of existing. 
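
That proof-free style can be sketched in a few lines of Python (my own example, not Barrow's): accumulate evidence for the twin prime conjecture by brute force, and simply declare it "true" once enough cases pass with no counterexample. The function names are hypothetical.

```python
# Evidence-gathering "mathematics": count twin-prime pairs up to a bound
# and take their steady appearance as grounds for belief, not proof.

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def twin_primes_up_to(limit):
    """Return all twin-prime pairs (p, p + 2) with p + 2 <= limit."""
    return [(p, p + 2) for p in range(2, limit - 1)
            if is_prime(p) and is_prime(p + 2)]

pairs = twin_primes_up_to(100)
print(pairs)       # 8 pairs, from (3, 5) up to (71, 73)
print(len(pairs))  # 8
```

Raising the bound keeps producing pairs, which is exactly the kind of "quadrillion examples" confidence described above, and exactly what our mathematics refuses to accept as proof.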




> I don't think the universe needs to support this except to allow consciousness to exist, but that brings me back to a previous question: is it possible to have a universe without consciousness?


A wonderful question. I am not sure how the machine hybrids which will supplant us are going to feel about that kind of question. Their own particular origins will not be in as much dispute for them as our own are to us.




> Edit: Gödel showed that mathematics couldn't be both complete and consistent. All we can hope for is that it is consistent. What that means to me is that if we start with the same assumptions we should reach the same conclusion; otherwise I would have to doubt whether mathematics is consistent at all. I don't see how this depends on the kind of universe we are in.


Anyone who cannot love Gödel cannot love a mad scientist. The biggest stars of the mathematical universe often come out of nowhere. In the end it wasn't the great Hilbert or the mighty Bertrand Russell or Peano who got the final say, it was freaky little Gödel. He crushed their hopes and years of work in a few pages. Einstein famously said that his own work hardly mattered anymore, but that he still went in to work every day at Princeton for the privilege of walking home with Gödel. On Einstein's birthday (70th?) Gödel presented him with a gift perhaps no other could have: a legitimate manipulation of Einstein's own equations pointing to a universe where time travel would be possible. Now that was a cool gift.

We had better learn somewhere along the way that human intuition can be both a brilliant and a false guide to what is possible. We would hate to fall into a trap like poor Kant's, whose legacy will forever bear the blemish of his postulation of Euclidean space as an _a priori_ truth. Kant thought this view of space rested on unassailable grounds. Another view of space did not occur to him. Euclidean space was a _necessary_ truth, not a _contingent_ one. He rested his case.

By the time Kant was nearing his deathbed the teenaged Gauss had already recognized non-Euclidean space, as proven by entries in his notebooks when he was thirteen. He never published his discovery. When Gauss made _few but ripe_ his motto, he really meant it. If he had published even half of what he knew, instead of waiting for his students and later mathematicians to rediscover his secrets, mathematics might be fifty years ahead of where it is now. Wouldn't we love to know what frontiers it will have broken in fifty years? If only Gauss had possessed a few more of the gracious personality traits of Euler and fewer of the anal-retentive ones of Newton.

Anyway, whether universes are possible which we can only imagine to be impossible is very tricky if one lets it run. I feel the answer might lie at a higher meta-logic, not at our current level. What we can imagine expands forever like the arms of a graphed curve. We don't even know if such a curve has asymptotes. There goes that imagery bug again. Metaphors are rather hard to defend scientifically. 

Like yourself, I am happy to leave it for now that some axiomatic logical propositions are independent of the kind of universe we are in. It was an important first distinction to make. Otherwise it would keep clouding the issues later on in various contexts and guises. It still might anyway, but we have cleared the way of enough philosophical boulders for modern cosmology to begin without constant interference from the galleries of ourselves.

----------


## YesNo

> Tricks do not mean it could support a universe. We are getting beyond this.


What trick? Why does mathematics have to "support a universe"? What I am trying to probe is what I see as a confusion between a particular model (mathematics) and reality (whatever it is).




> Home arithmetic is exactly the fundamental principles of counting we learned in early grade school. That is the system which is normal and intuitive to us. Systems that ignore or counter-define any properties of that system are legitimate in some sense, but counter-intuitive to us. They came later in our development as mathematicians.


That doesn't mean our home mathematics could not be something else in the same universe.




> A famous mathematician, maybe Barrow, who wrote _Pi in the Sky_, envisioned a system of math from a civilization which did not require proofs for propositions to be defined as true. A quadrillion examples without a counterexample appearing was rigorous enough for them. They were able to perform other operations with perfect confidence (for them) that no counterexamples would ever crop up. They could assume, for instance, that there are infinitely many pairs of twin primes, and base further calculations on this "fact," from which they might glean other "facts." This mathematics would be philosophically different from ours, yet easily capable of existing.


That sounds like a physics rather than a mathematics.




> A wonderful question. I am not sure how the machine hybrids which will supplant us are going to feel about that kind of question. Their own particular origins will not be in as much dispute for them as our own are to us.


John Searle's "Chinese Room" argument has put an end to the AI dream.

----------


## YesNo

> Can you go into some more detail as to why this question comes up for you? Interesting discussion.


I don't think a universe can exist without consciousness. One of the reasons for thinking this is to examine what we mean by "agents". These would be parts of the universe that can make a choice. Agents have enough consciousness to make a choice. We would be examples of agents. 

Agents are not totally free to act. Their choices can be predicted. For example, given a choice between vanilla and chocolate ice cream on a certain day the probability distribution of my choice might be 30% for vanilla and 70% for chocolate. Now consider an electron with its choice between spin up or spin down. That choice could also have a 30%-70% probability distribution. Based on this behavior, could the electron not also be considered an "agent" with enough "consciousness" to make a choice?
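
The 30%/70% picture can be mimicked in a few lines of Python (a toy of my own, with the weights taken from the example above, and no claim that this is what an electron actually does):

```python
import random

# Toy "agent" whose choices follow a fixed probability distribution,
# like the 30% vanilla / 70% chocolate example.

def choose(options, weights, rng):
    """Pick one option according to the given weights."""
    return rng.choices(options, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the run is repeatable
trials = [choose(["up", "down"], [0.3, 0.7], rng) for _ in range(10000)]
frac_down = trials.count("down") / len(trials)
print(round(frac_down, 2))  # close to 0.7 over many trials
```

Of course, the sketch is neutral on the philosophical question: the same distribution is equally compatible with a "choice" and with plain randomness, which is rather the point under discussion.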

There are people who would claim that consciousness doesn't exist at all, being some kind of illusion of something still unknown, or an epiphenomenon generated by unconscious matter that functions through determinism and randomness. Mathematics would be the model that patterns that determinism and randomness. But, given the uncertainty in quantum physics, is it possible for "unconscious matter" to even exist? If it is not, then no universe can exist without consciousness.

----------


## Eupalinos

This is not a theory of agency I had encountered before and it's intriguing. In this scheme is there a worthwhile distinction to make between a consciousness that thinks about its choices and one that has no potentiality of self-reflective thought? Or are the evolved forms of thought unimportant?

The word consciousness would seem to have attached to it through common usage 'awareness' -- is there evidence that awareness might be attributed to an electron? Maybe our perceived awareness is illusory? I'd be curious to learn more if you can point me in a certain direction of texts. (Also interested in your explanation.)

----------


## desiresjab

> What trick? Why does mathematics have to "support a universe"? What I am trying to probe is what I see as a confusion between a particular model (mathematics) and reality (whatever it is).
> 
> That doesn't mean our home mathematics could not be something else in the same universe.


I have to disagree. I think our home arithmetic has to be what it is in the universe we are aware of. Sure, there have been primitive societies who counted only 1, 2, and went from there to _many_. That is still an abbreviation of fundamental counting. It contradicts it in no way. Beings in our universe should develop the natural way of counting before anything else. I absolutely cannot see any being or civilization developing matrix algebra first and coming to fundamental counting later, which would make fundamental counting an alternative algebraic structure to those beings, and matrix algebra their home arithmetic. Maybe that is possible to say in words. But how could it happen with live beings in a universe? I say that it cannot happen in the universe as we currently understand it.

----------


## desiresjab

> I absolutely cannot see any being or civilization developing matrix algebra first and coming to fundamental counting later, which would make fundamental counting an alternative algebraic structure to those beings, and matrix algebra their home arithmetic. Maybe that is possible to say in words. But how could it happen with live beings in a universe? I say that it cannot happen in the universe as we currently understand it.


But of course we now know again that we understand so little of our universe that the phrase _anything is possible_, seems apt from the point of view of pure wonderment. Still, from what is known, I do not see how it could happen.

----------


## HCabret

Chong: “One day I took some acid and played Black Sabbath at .78 speed.”
Cheech: “Yeah? And then what happened?”
Chong: “I saw… GOD!”

----------


## desiresjab

> Chong: “One day I took some acid and played Black Sabbath at .78 speed.”
> Cheech: “Yeah? And then what happened?”
> Chong: “I saw… GOD!”


It applies.

Most people are used to contemplating the negative philosophical implications of Gödel's theorems. Gödel proved that mathematics has infinite complexity, but that under one roof it can never resolve all contradictions or settle all disputes from its axioms. Theoretically, under the new axioms of a higher meta-logic the old disputes could all be settled, but they would only give rise to new and more advanced disputes not decidable under that new axiomatic system. This process could go on ad infinitum, always allowing us greater understanding but never completing that understanding.

Infinite complexity cannot necessarily capture any possible reality, but maybe it could. Of which order is the infinite complexity of mathematics--Aleph nought or the continuum? Aleph nought could not give rise to all possibilities along the continuum. This is a conjecture which surely has to be true.

----------


## desiresjab

I like starting my universe with infinite mathematical complexity. A result of finite complexity would have been crushing.

It will probably now require some time to make an acceptable definition of consciousness. It is a whole family of functions or a gradient of one complex function. If we grant a mosquito consciousness along with ourselves in our scheme, we have to acknowledge vast differences in the two types, though we are stating that at one level of abstraction there is some property they hold in common which is more fundamental to an understanding of them than their differences.

If consciousness is a gradient from amoeba to mosquito to porpoise to man to..., then we are allowed the liberty of arbitrary demarcations along that gradient, like the numbers on a number line, until at some point in the future we might actually know where the important points along that curve lie and what those points represent. An attempt at an intuitional arithmetic of consciousness. The naive version.

I propose as the minimal qualification for what I call super-consciousness that undefined but understood level of consciousness necessary to catch one's self thinking. Thinking about thinking is the critical threshold I arbitrarily select as the beginning of true consciousness, which we have named super-consciousness. If you cannot think about what you are thinking about, then you are not super-conscious, though indeed you are conscious and thinking.

Since it is only super-conscious beings that can consider themselves and formulate mathematical laws, if I must make a demarcation, I make it here.

----------


## YesNo

> This is not a theory of agency I had encountered before and it's intriguing. In this scheme is there a worthwhile distinction to make between a consciousness that thinks about its choices and one that has no potentiality of self-reflective thought? Or are the evolved forms of thought unimportant?


Yes, the consciousness implied in an electron by the choice it makes between spin up and spin down is very primitive compared to the consciousness that we enjoy. We don't even see it as conscious, which I think is part of the motivation for dualism. Intuitively we view the world around us as composed of real, animate agents such as people, pets and so on, as well as inanimate reality such as sidewalks, stones, water and so on. The inanimate part looks as if it is not made of agents at all. Hence the dualistic split between agents and supposed non-agents.

This is only one way to approach the question of whether a universe is possible without consciousness. I am primarily motivated by finding a justification for Thomas Nagel's panpsychism (see his essay in _Mortal Questions_), which implies that consciousness would have to permeate the universe to the lowest levels in order for our consciousness to eventually appear at all. I see this as a justification for reductionism given that consciousness (ours) exists.

What I am trying on here is philosophy, not physics. I limit physics to positivism as the Copenhagen interpretation does. When physicists start doing philosophy it seems to me they get caught up in a mathematical mysticism, since they are used to using mathematical models. That is what I want to avoid. This leads them to ideas of "determinism" and "randomness", neither of which I think is part of reality; both are part of their models. It also leads some of them to project mathematical structures, such as the superpositions of trig functions in Schrödinger's wave function, onto reality, with each superposition being one of the "many worlds". My view is that reality is more interesting than these mathematical models.




> The word consciousness would seem to have attached to it through common usage 'awareness' -- is there evidence that awareness might be attributed to an electron? Maybe our perceived awareness is illusory? I'd be curious to learn more if you can point me in a certain direction of texts. (Also interested in your explanation.)


I don't think the electron is aware, only conscious enough to make a choice, but I don't know. It has a disposition and it appears to make "choices" when subjected to experiments. This leads to a different view of causality from the ideas we typically assume today, which can be traced to David Hume. One book that I have found fascinating is Stephen Mumford and Rani Lill Anjum's _Causality: A Very Short Introduction_.

----------


## durlabh

Multiverse (the theory of many universes)

The results of applying quantum theory predict the presence of multiple universes overlapping our own: parallel universes coexisting with ours, undetectable but affecting the human mind and destiny. To us they would look strange, with otherworldly properties of many dimensions and physical properties not built on our atoms and our kind of energy. All our scientific laws of time, gravity and electromagnetic forces may not apply there.
Not only physical but mental realities may have different properties there. Newtonian laws, Einstein's conclusions and Bohr's theories hold only in a limited field and may fail in another universe, ushering in a much wider context for physical structures.
Our scientific laws put reality into a deterministic mode to satisfy scientists' egos, making it something static and dead, but life is more dynamic, full of movement and change. Scientific theories are based on the laws of logic, as worked out in their long-winded equations, and they reduce a very limited field to a dead entity.
Moreover, Einstein's theories predict much more energy in the universe than is found in visible matter and detectable energy. There must be more matter than we can detect supporting time and space, termed dark matter and dark energy, and by current estimates it may account for up to 90% of our universe. We are dealing with only 10% of the visible universe. Our established scientific laws and properties may describe only a very tiny portion of the intrinsic universe.
In Buddhist scriptures it is mentioned that the Buddha visited three thousand other universes to give his sermons. Guru Nanak stated that there are stars upon stars, planets upon planets, universes upon universes, and that the human mind tires of applying its intellect to thinking about them.
Both science and religion indicate that reality is far vaster than our logical minds can grasp.

Durlabh Singh© 2015

----------


## YesNo

> Our scientific laws put reality into a deterministic mode to satisfy scientists' egos, making it something static and dead, but life is more dynamic, full of movement and change.


I hadn't thought of ego being an explanation for the love of determinism.




> Moreover, Einstein's theories predict much more energy in the universe than is found in visible matter and detectable energy. There must be more matter than we can detect supporting time and space, termed dark matter and dark energy, and by current estimates it may account for up to 90% of our universe. We are dealing with only 10% of the visible universe. Our established scientific laws and properties may describe only a very tiny portion of the intrinsic universe.


I wonder if there is any dark matter or energy. Maybe we just need to revise the laws of physics or find a way to take better measurements.

----------


## desiresjab

Some scientific theories have names that are irresistible to the public: Relativity, Chaos theory, String theory, Catastrophe theory and now the Multiverse. Is there a human being alive who does not want to believe this? I want to believe in multiverses. But my own standards prevent me from subscribing to theories with no empirical evidence.

As I understand it, multiverses were postulated because our cosmological constant was so finely tuned that it seemed to imply a deliberate action on the part of an intelligence. Multiverses were posited to provide an end run around the notion of a designer. Our strange cosmological constant is not so strange if there are up to 10^500 other universes lurking somewhere out there.

I think there is no evidence for multiverses other than this flimsy excuse for a theory. I wish it were otherwise. I suppose I could be convinced.

----------


## desiresjab

I am no expert on multiverse theory, I want to make that clear. _As far as_ I have examined it, I love it, and it seems plausible in a highly abstract way. Evidence, or at least more rational theorizing on the subject, is something I would like to see. But most of what one reads, even from established minds, seems much closer to populist extrapolation gone wild than to science.

Perhaps we are witnessing science molting from its old skin. Or better, the caterpillar is transforming into a butterfly. Is science becoming more verbal as its discoveries grow ever more abstract? Somewhere along the line the math has to work out. But the public has an appetite for _suggestive science_, a term coined right now to mean wild and fun extrapolations on serious theories with suggestive names. Like a comet, science has a tail, which is most of what we see.

----------


## YesNo

> Some scientific theories have names that are irresistible to the public: Relativity, Chaos theory, String theory, Catastrophe theory and now the Multiverse. Is there a human being alive who does not want to believe this? I want to believe in multiverses. But my own standards prevent me from subscribing to theories with no empirical evidence.
> 
> As I understand it, multiverses were postulated because our cosmological constant was so finely tuned that it seemed to imply a deliberate action on the part of an intelligence. Multiverses were posited to provide an end run around the notion of a designer. Our strange cosmological constant is not so strange if there are up to 10^500 other universes lurking somewhere out there.
> 
> I think there is no evidence for multiverses other than this flimsy excuse for a theory. I wish it were otherwise. I suppose I could be convinced.


One gets the likelihood of a multiverse as soon as one accepts that our present universe had a beginning. If it happened once it probably happened many times before. Hence a multiverse.

However, that argument implies those other universes would all be very similar to our own since our universe is all we have evidence for.

If one also needed, because of one's metaphysical assumptions, to deny transcendent consciousness (or ultimately to trash consciousness of any sort), then finding evidence for the beginning of our universe results in cognitive dissonance. In order to keep one's metaphysics intact, one now has to scramble to explain where all the stuff of the universe came from. One can no longer assume our universe has always been there. Did all this stuff come "from nothing"? That idea would make things even worse, but it looks as if not only all the stuff in our universe but also space and time itself had a beginning.

One way to deal with cognitive dissonance is to explain it away adequately enough so one can forget the dissonance and be happy again. One common way this is done is to appeal to "randomness" and allow the multiverse to contain all kinds of strange universes. Why does this help? With randomness some people think they can rationally continue pretending that consciousness does not exist. 

Because they are the result of a metaphysical angst, I don't accept randomness arguments. So, I would agree with you, desiresjab, that there is no evidence for a _random_ multiverse, but I do think the big bang is itself evidence that there is a _non-random_ multiverse, basically a multiverse of universes much like our own.

----------


## desiresjab

> One gets the likelihood of a multiverse as soon as one accepts that our present universe had a beginning. If it happened once it probably happened many times before. Hence a multiverse.
> 
> However, that argument assumes those universes would all be rather similar, that is, a multiverse of universes able to support life.
> 
> If one also needed, because of one's metaphysical assumptions, to deny transcendent consciousness (or ultimately consciousness of any sort), then finding evidence for the beginning of our universe results in cognitive dissonance. In order to keep one's metaphysics intact, one now has to scramble to explain where all the stuff of the universe came from. One can no longer assume our universe has always been there. Did all this stuff come "from nothing"? That idea would make things even worse, but it looks as if not only all the stuff but also space and time itself had a beginning.
> 
> One way to deal with cognitive dissonance is to find a way to explain it away. One common way this is done is to appeal to "randomness" and allow the multiverse to contain all kinds of strange universes. Why is that necessary? One has to find a way to make sure that a consciousness choice was never made. 
> 
> Because of this angst that motivates an appeal to randomness, I don't accept a randomness argument at all. So, I would agree with you, desiresjab, that there is no evidence for a random multiverse, but I do think the big bang is itself evidence that there is a _non-random_ multiverse, basically a multiverse of universes much like our own.


I accept randomness as the best tool at present to help capture certain properties of some phenomena. Consciousness is easier for me to accept as a universal constant because it can be defined so many ways. 

The big bang compressed not only space and time but information as well. That which unfolded was seeded with an incredible richness of emergent properties. It is this wealth of emergent properties which pulls me toward the rule of consciousness. I probably believe in intelligent design, but I think it is a useless concept to science. It is like saying _look for patterns_, and science shouts back, _what do you think we have been doing for all of these centuries_? I think Dawkins made a doc called Take Back Intelligent Design. 

I might believe that existence has infinite emergent properties enfolded within it; that these emergent properties over time stack like exponents and rapidly or slowly differentiate one emergent line from another; that evolution is only an expression of a more encompassing rule of emergent properties we have not yet realized into an equation; that without this infinite enfolding of emergent richness, randomness and thirteen point seven billion years are too little for all that has come to be; or maybe all of that is only philosophical rationalizing of how I hope the multiverse operates.

We have already noted that mathematics has identified itself by way of proof as possessing infinite complexity but unable to store it under one roof. That's how it is. It takes infinite roofs to store infinite complexity. Unfortunately, it means there is no mathematical theory of everything--no TOE in math. I really don't know what that implies for the relationship of math and physics in the future, but I think equations will signal revolutions in the foreseeable future as they have in the recent past from Newton to Maxwell to Einstein. When the paradigm shift comes that revolutionizes thinking, an equation or mathematical structure will be close by. That is comforting.

I do not have a problem sharing with meaningless universes in addition to those packed with information and emergent properties that manage to unfold. We have to live with certain philosophical paradoxes to explore our existence. Philosophy is better at dealing with those than science is. 

If the human race manages to survive without continually destroying its own knowledge, I do not believe in any limit to our increase of understanding, as our consciousness is, so far, the ultimate product of evolution under enfoldment, with infinite potential remaining.

Any God is way back there, not upfront in your face, as religions have it. The packing of infinite enfoldment means a distant, abstract intelligence to me, if any. Intelligence and consciousness are only poorly defined concepts. We will continue to refine our assessments of them. 

I am not bothered that some people attribute all that exists to pure randomness and others to intelligence, for intelligence is vague and sure to be redefined to fit our philosophical needs, and even randomness might eventually be redefined as something we only partially understood in our past. Is it possible that randomness itself could be the elusive intelligence we are seeking, the engine and creator of infinite enfoldment? Yeah, that is possible too. Most of all, I would like the ultimate answer to include me. Whether the afterlife is generated randomly or through intelligent enfoldment is of little concern compared to my concern for my continued existence. Feelings like this are why the multiverse theory is so appealing. The _suggestive science_ is without peer.

I see absolutely no reason for me and my stomach bacteria to be here right now other than enfoldment. If enfoldment is infinite, that means every kind of universe unfolds. I must accept the paradox that even impossible universes, then, would have to unfold. There will always be paradoxes because of our hierarchical position in the metalogical structure, which is infinite. But many paradoxes which stumped both the ancients and our mere elders have been comfortably solved, so that they no longer pester our reason in the least. Olbers' and Zeno's paradoxes, to name but two famous examples. This is meta science, this is meta logic in action as it unfolds, climbing its own rungs, turning paradox into understanding. We are at a higher level of meta science and meta logic than our elders, otherwise we could not solve so many of their paradoxes so truly.

----------


## YesNo

> I accept randomness as the best tool at present to help capture certain properties of some phenomena. Consciousness is easier for me to accept as a universal constant because it can be defined so many ways.


There are at least three things that I don't accept: (1) determinism, (2) randomness and (3) physical constants. The reason is that they are all properties of mathematical models, and quantum physics has undermined the first two. Within the models we can have determinism, randomness and constants. They make the models simpler to use for calculations, and this is where their use value lies.




> The big bang compressed not only space and time but information as well. That which unfolded was seeded with an incredible richness of emergent properties. It is this wealth of emergent properties which pulls me toward the rule of consciousness. I probably believe in intelligent design, but I think it is a useless concept to science. It is like saying _look for patterns_, and science shouts back, _what do you think we have been doing for all of these centuries_? I think Dawkins made a doc called Take Back Intelligent Design.


I actually don't believe in "intelligent design".  It assumes the universe is a deterministic machine that needs a designer to build it, wind it up and let it run down. I don't see the universe as a machine. This comes from my rejection of mathematical models as proxies for reality. 

This rejection of an intelligent designer does not mean that I reject "cosmic consciousness", "transcendent consciousness" or other concepts of "God". I don't think the universe can exist right here right now without such transcendent reality sustaining it, but this has nothing to do with "design".

When theists argued for a conscious designer in the 19th century they were trying to make an argument that those who believed in mathematical determinism would be able to understand. It was a mistake to go down the road of determinism as far as they did.




> I might believe that existence has infinite emergent properties enfolded within it; that these emergent properties over time stack like exponents and rapidly or slowly differentiate one emergent line from another; that evolution is only an expression of a more encompassing rule of emergent properties we have not yet realized into an equation; that without this infinite enfolding of emergent richness, randomness and thirteen point seven billion years are too little for all that has come to be; or maybe all of that is only philosophical rationalizing of how I hope the multiverse operates.


The concept of "emergent properties" is also important. At the moment, I don't believe in them. There are strong and weak forms of emergent property theory. The strong form would allow consciousness to emerge from unconsciousness. I think this is ridiculous, but I will leave it to those who promote ideas like panpsychism, such as Thomas Nagel, to argue against it.

I am also opposed to weak emergent properties which people who support panpsychism may not be. This means that I favor a non-reductionist view of reality (although reductionist models may have use-value in simplifying calculations leading to useful predictions). 

If one believes in even a weak form of emergent properties one then needs a way for this emergence to occur. I agree with you that "randomness and thirteen point seven billion years are too little for all that has come to be". How more complicated forms arose from simpler forms needs to be explained beyond a simple faith that it must have happened that way. One of the reasons I like Rupert Sheldrake's morphic fields is he tries to provide such an explanation using modern field concepts. Maybe Sheldrake will convince me that weak emergent properties are possible.




> We have already noted that mathematics has identified itself by way of proof as possessing infinite complexity but unable to store it under one roof. That's how it is. It takes infinite roofs to store infinite complexity. Unfortunately, it means there is no mathematical theory of everything--no TOE in math. I really don't know what that implies for the relationship of math and physics in the future, but I think equations will signal revolutions in the foreseeable future as they have in the recent past from Newton to Maxwell to Einstein. When the paradigm shift comes that revolutionizes thinking, an equation or mathematical structure will be close by. That is comforting.


I agree.




> I do not have a problem sharing with meaningless universes in addition to those packed with information and emergent properties that manage to unfold. We have to live with certain philosophical paradoxes to explore our existence. Philosophy is better at dealing with those than science is. 
> 
> If the human race manages to survive without continually destroying its own knowledge, I do not believe in any limit to our increase of understanding, as our consciousness is, so far, the ultimate product of evolution under enfoldment, with infinite potential remaining.
> 
> Any God is way back there, not upfront in your face, as religions have it. The packing of infinite enfoldment means a distant, abstract intelligence to me, if any. Intelligence and consciousness are only poorly defined concepts. We will continue to refine our assessments of them.


I like the way you describe God as not currently being "upfront in your face". I have no religion to promote, and some specific religions annoy me, but if transcendent consciousness really does sustain the universe then not seeing that transcendent consciousness "upfront in your face" may be a sign that the theories one has about reality are wrong.

The reason I don't think there are meaningless universes is because I don't think a universe can exist--at all--without consciousness. In other words, there is no unconscious matter out there out of which a universe could be constructed or designed. That means there is nothing out of which one can construct a meaningless universe.




> I am not bothered that some people attribute all that exists to pure randomness and others to intelligence, for intelligence is vague and sure to be redefined to fit our philosophical needs, and even randomness might eventually be redefined as something we only partially understood in our past. Is it possible that randomness itself could be the elusive intelligence we are seeking, the engine and creator of infinite enfoldment? Yeah, that is possible too. Most of all, I would like the ultimate answer to include me. Whether the afterlife is generated randomly or through intelligent enfoldment is of little concern compared to my concern for my continued existence. Feelings like this are why the multiverse theory is so appealing. The _suggestive science_ is without peer.


Randomness seems to me to be one cognitive dissonance response to uncertainty. It allows one to continue pretending that things can change without consciousness being involved. However, the uncertainty of quantum physics need not have a uniform distribution. Therefore, generally that uncertainty is not random. 
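The point that quantum uncertainty need not be uniform can be illustrated with the Born rule: a qubit's measurement outcomes are uncertain, yet their probabilities are fixed by the state and are generally not a 50/50 split. A small sketch (the particular state chosen here is arbitrary):

```python
import math

# Born rule: for a qubit in state a|0> + b|1>, the outcome probabilities
# are |a|^2 and |b|^2 -- uncertain, but generally NOT a uniform split.
a, b = math.cos(math.pi / 6), math.sin(math.pi / 6)  # an illustrative state

p0, p1 = a**2, b**2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # P(0) = 0.75, P(1) = 0.25
assert abs(p0 + p1 - 1) < 1e-12             # probabilities still sum to 1
```

So the outcomes are unpredictable individually, but their distribution is anything but arbitrary.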




> I see absolutely no reason for me and my stomach bacteria to be here right now other than enfoldment. If enfoldment is infinite, that means every kind of universe unfolds. I must accept the paradox that even impossible universes, then, would have to unfold. There will always be paradoxes because of our hierarchical position in the metalogical structure, which is infinite. But many paradoxes which stumped both the ancients and our mere elders have been comfortably solved, so that they no longer pester our reason in the least. Olbers' and Zeno's paradoxes, to name but two famous examples. This is meta science, this is meta logic in action as it unfolds, climbing its own rungs, turning paradox into understanding. We are at a higher level of meta science and meta logic than our elders, otherwise we could not solve so many of their paradoxes so truly.


Olbers' paradox was resolved by the evidence supporting the big bang. What Olbers' paradox reminds us today is that if the big bang had not occurred and the universe were infinite, then life could not exist.
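For readers unfamiliar with it, the classical argument behind Olbers' paradox is that in an infinite, static, uniformly star-filled universe, every spherical shell of stars contributes the same flux to the night sky, so the total brightness diverges. A toy calculation in arbitrary units:

```python
import math

def sky_brightness(n_shells, star_density=1.0, shell_thickness=1.0):
    """Total flux from concentric shells of stars around an observer."""
    total = 0.0
    for k in range(1, n_shells + 1):
        r = k * shell_thickness
        # Stars in a thin shell grow as r^2...
        n_stars = star_density * 4 * math.pi * r**2 * shell_thickness
        # ...while flux per star falls as 1/r^2, so the two cancel:
        flux_per_star = 1.0 / (4 * math.pi * r**2)
        total += n_stars * flux_per_star  # each shell adds the same amount
    return total

print(sky_brightness(10))    # 10.0 -- brightness grows with shell count
print(sky_brightness(1000))  # 1000.0 -- diverges as shells go to infinity
```

A universe of finite age (the big bang) caps the number of visible shells, which is why the night sky can be dark.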

----------


## desiresjab

> There are at least three things that I don't accept: (1) determinism, (2) randomness and (3) physical constants. The reason is that they are all properties of mathematical models, and quantum physics has undermined the first two. Within the models we can have determinism, randomness and constants. They make the models simpler to use for calculations, and this is where their use value lies.
> 
> 
> 
> I actually don't believe in "intelligent design". It assumes the universe is a deterministic machine that needs a designer to build it, wind it up and let it run down. I don't see the universe as a machine. This comes from my rejection of mathematical models as proxies for reality. 
> 
> This rejection of an intelligent designer does not mean that I reject "cosmic consciousness", "transcendent consciousness" or other concepts of "God". I don't think the universe can exist right here right now without such transcendent reality sustaining it, but this has nothing to do with "design".
> 
> When theists argued for a conscious designer in the 19th century they were trying to make an argument that those who believed in mathematical determinism would be able to understand. It was a mistake to go down the road of determinism as far as they did.
> ...


I would not come on and defend God. Like I said, maybe I believe in a creator. But if I do, it is not the loudmouth of holy texts.

To me intelligent design does not mean determinism must follow. A smart enough creator could certainly build a non-deterministic machine.

I tend to feel that a less than ultimate being may be directly responsible for our super consciousness but not for the universe itself.

I do not say that the explosion of super consciousness several hundred thousand years ago in humans could not be produced in a fully natural way, with the help of natural selection and some emergent properties, of course. But it seems fully reasonable that we could have been tampered with to become super conscious.

I have been a fan of Sheldrake's morphic fields for a while. I just wish he had more to say about them. I suppose he can't because they are pure speculation and philosophy. Haven't looked at him for a while. I doubt he is within a light year of a mathematical model. You would not frown on such a model. It would signal progress. Rupert is being more of a poet than a scientist.

In spite of your distrust of mathematical models, you understand that an equation is what it takes to shake the world. Yet some of the smartest guys in the world are occultists these days. They have extreme mathematical tools. Sheldrake is an example and Brian Josephson another. Brilliant men looking for the next paradigm shift. Peering under stones their elders would not have disturbed. Neither seems to have made a dent in parapsychology, one of their chosen inquiries.

----------


## Iain Sparrow

> I would not come on and defend God. Like I said, maybe I believe in a creator. But if I do, it is not the loudmouth of holy texts.
> 
> To me intelligent design does not mean determinism must follow. A smart enough creator could certainly build a non-deterministic machine.
> 
> I tend to feel that a less than ultimate being may be directly responsible for our super consciousness but not for the universe itself.
> 
> I do not say that the explosion of super consciousness several hundred thousand years ago in humans could not be produced in a fully natural way, with the help of natural selection and some emergent properties, of course. But it seems fully reasonable that we could have been tampered with to become super conscious.


We do not have super consciousness, merely the illusion of super consciousness.

_There is research showing that the brain has an on/off switch that triggers unconsciousness. Mohamad Koubeissi at the George Washington University in Washington DC and his colleagues described a way to switch off consciousness by electrically stimulating a part of the brain called the claustrum.

The discovery came while the researchers were studying a woman who has epilepsy. During a procedure, they used deep brain electrodes to record signals from different parts of her brain in order to determine where her seizures were originating. One electrode was placed next to the claustrum, a thin, sheet-like structure underneath the neocortex. Although this area had never been electrically stimulated before, it had been implicated in the past as a possible control center for consciousness by neuroscientist Francis Crick, who co-discovered the structure of DNA, and his colleague Christof Koch of the Allen Institute for Brain Science in Seattle.

Koubeissi and his team found that Crick and Koch might have been on to something. When they stimulated the area with electrical impulses from the brain electrodes, the woman stopped reading, stared blankly into space and didn't respond to auditory or visual commands. Her breathing slowed as well. She had lost consciousness. When the scientists turned off the electrical stimuli, she immediately regained consciousness with no memory of blanking out. Additional attempts were tried over two days and each time, the same thing happened._

Consciousness is a state of matter, governed by the same physical laws as everything else.
Nothing mystical about it, no need for a super being tinkering with our DNA.

----------


## desiresjab

> We do not have super consciousness, merely the illusion of super consciousness.
> 
> _There is research showing that the brain has an on/off switch that triggers unconsciousness. Mohamad Koubeissi at the George Washington University in Washington DC and his colleagues described a way to switch off consciousness by electrically stimulating a part of the brain called the claustrum.
> 
> The discovery came while the researchers were studying a woman who has epilepsy. During a procedure, they used deep brain electrodes to record signals from different parts of her brain in order to determine where her seizures were originating. One electrode was placed next to the claustrum, a thin, sheet-like structure underneath the neocortex. Although this area had never been electrically stimulated before, it had been implicated in the past as a possible control center for consciousness by neuroscientist Francis Crick, who co-discovered the structure of DNA, and his colleague Christof Koch of the Allen Institute for Brain Science in Seattle.
> 
> Koubeissi and his team found that Crick and Koch might have been on to something. When they stimulated the area with electrical impulses from the brain electrodes, the woman stopped reading, stared blankly into space and didn't respond to auditory or visual commands. Her breathing slowed as well. She had lost consciousness. When the scientists turned off the electrical stimuli, she immediately regained consciousness with no memory of blanking out. Additional attempts were tried over two days and each time, the same thing happened._
> 
> Consciousness is a state of matter, governed by the same physical laws as everything else.
> Nothing mystical about it, no need for a super being tinkering with our DNA.


I have merely made a demarcation in consciousness: the ability to catch one's self thinking. That is all it takes for super consciousness, as I have defined it.

Whether anything mystical is happening is not a discussion for me. I am sure we do not know the proper meanings of natural or supernatural. My approach to cosmology is through science and math. Sometimes I like to take the ball and run for a ways toward one goal or the other with extrapolations, but in the end I always come out an agnostic on the fifty yard line. It is the only reasonable place I see. Socrates mentioned that any strong belief on these matters is presumptuous. I took that to heart long ago.

I have no problems with intelligent design or random evolution until someone makes claims of _knowing_ the answers or of _knowing_ certain things I do not believe are knowable. Then I need sharable proof.

----------


## YesNo

> I would not come on and defend God. Like I said, maybe I believe in a creator. But if I do, it is not the loudmouth of holy texts.


I don't have any sacred texts. Perhaps all texts are sacred.




> To me intelligent design does not mean determinism must follow. A smart enough creator could certainly build a non-deterministic machine.


I might have the history of ideas wrong, but I think intelligent design can be traced back to Paley's argument about finding a watch (a deterministic mechanism) and then assuming there must be a watchmaker. This idea probably goes back to the 18th century or earlier as well. 

It seems to me that theists some centuries ago were predominately determinists and dualism was the way they handled the cognitive dissonance this ultimately deistic perspective awoke. My problem with that deistic view is the assumption that the universe is deterministic (or random). This I think comes from a belief that mathematical models ("laws") _are_ reality. They are just models. It is like saying the road you are driving on _is_ the picture on your GPS app and therefore must have all the software components in it that are used by the app to display the picture.

A "non-deterministic machine" would be one with unconscious randomness involved in it. Determinism and randomness are two sides of the same unconsciousness coin. Neither are real, but they do have use-value in simplifying quantitative predictions about reality.




> I tend to feel that a less than ultimate being may be directly responsible for our super consciousness but not for the universe itself.


This sounds like some form of dualism. Dualism would be a belief in the existence of unconscious matter along with a belief in the existence of consciousness. 

Iain Sparrow's post provides one challenge to dualism. His post is based on a belief in materialism, that is, a belief in the existence of unconscious material stuff, which I think quantum physics undermined. But then, I'm an idealist, not a materialist. 

To be a materialist after quantum physics, one would have to take either some sort of superdeterminism or a many worlds approach to reality. Either of these general approaches seems more absurd to me, besides having no empirical evidence to back them up, than simply giving consciousness its due for which we do have empirical evidence.




> I do not say that the explosion of super consciousness several hundred thousand years ago in humans could not be produced in a fully natural way, with the help of natural selection and some emergent properties, of course. But it seems fully reasonable that we could have been tampered with to become super conscious.


I agree that the existence of consciousness outside of its manifestation as matter is possible, just as light can exist without acquiring mass. What I don't think is possible is the existence of matter/mass without consciousness. It would be like having matter without energy.

I think evolution proceeded the way that Niles Eldredge described, through the process of punctuated equilibria. That would be a natural way using natural selection. I think he is right in recognizing the existence of things like "species" characterized by their stable DNA and from which other species form through geographic isolation. I am unclear how more complicated forms emerge from less complicated forms through this process alone, unless they don't actually "emerge" but are part of some pre-existing "field" properties or a participation in consciousness all along, just waiting for their opportunity to manifest.




> I have been a fan of Sheldrake's morphic fields for a while. I just wish he had more to say about them. I suppose he can't because they are pure speculation and philosophy. Haven't looked at him for a while. I doubt he is within a light year of a mathematical model. You would not frown on such a model. It would signal progress. Rupert is being more of a poet than a scientist.


I don't know much about Sheldrake except for a few books I have read. It is from him that I began doubting the existence of physical constants. I don't think physical constants precise to arbitrary decimal places make much sense given that quantum energy changes are not able to be that precise. However, without those arbitrarily precise physical constants, the determinism implied in mathematical models is not supported in reality.




> In spite of your distrust of mathematical models, you understand that an equation is what it takes to shake the world. Yet some of the smartest guys in the world are occultists these days. They have extreme mathematical tools. Sheldrake is an example and Brian Josephson another. Brilliant men looking for the next paradigm shift. Peering under stones their elders would not have disturbed. Neither seems to have made a dent in parapsychology, one of their chosen inquiries.


A paradigm shift is a cultural change. It is very close to the pattern of change of punctuated equilibria that Eldredge and Gould presented some decades ago. I think Dean Radin, among others, has adequately demonstrated that paranormal phenomena are real. What will shake the world? Maybe recognizing mathematics for what it is--a model, nothing more.

----------


## Iain Sparrow

> Iain Sparrow's post provides one challenge to dualism. His post is based on a belief in materialism, that is, a belief in the existence of unconscious material stuff, which I think quantum physics undermined. But then, I'm an idealist, not a materialist. 
> 
> To be a materialist after quantum physics, one would have to take either some sort of superdeterminism or a many-worlds approach to reality. Either of these general approaches seems more absurd to me, besides having no empirical evidence to back them up, than simply giving consciousness its due, for which we do have empirical evidence.


This all goes to the notion that we humans are special... in fact all the evidence, from Copernicus' observations to Relativity, Quantum Theory and Cosmic Inflation, suggests we are in fact _Homo sapiens_, that is, just another beast on a planet that can support organic life orbiting an unexceptional star. Evolution has endowed us with a level of consciousness; in that, we are not alone, as other beasts likewise have a level of _human-like_ consciousness... http://us.whales.org/blog/2012/08/we...er-species-are

----------


## desiresjab

> This all goes to the notion that we humans are special... in fact all the evidence, from Copernicus' observations to Relativity, Quantum Theory and Cosmic Inflation, suggests we are in fact _Homo sapiens_, that is, just another beast on a planet that can support organic life orbiting an unexceptional star. Evolution has endowed us with a level of consciousness; in that, we are not alone, as other beasts likewise have a level of _human-like_ consciousness... http://us.whales.org/blog/2012/08/we...er-species-are


Exactly. What we might call the standard model of cosmic evolution.

We were not special, now we are. What makes us special is that we can catch ourselves thinking and have realized our predicament. We alone have realized our predicament. Our species locates and defines problems, then attempts to solve them. As a species, we do not give up when it comes to problem solving.

Our problem, since we realized it, has always been our mortality. Our social problems are minor affairs. Our problem is and always has been death. Ancient monarchs declared themselves immortal in an attempt to transcend mortality. They consulted seers and soothsayers and witches--anything to live forever. Next to immediate survival, immortal survival has, since the ignition of our super consciousness, always been our main racial preoccupation.

So of course that is the prize offered to mortal man by religion. No surprise there. Most of the introspection and exploration initiated in brains too big for mere survival are attempts to transcend death. We forget the origins of why we cheer on science--how to transcend death. That is what has mainly occupied us besides survival and luxury since the ignition of super consciousness.

No one cares if there is a God or not--they care about living forever. An afterlife without any God at all is just as desirable, if not more so, than a cosmos with a God. Most religious people would be afraid to admit this. A universe with a built-in afterlife, owing its existence to no more than factors of an expanded standard model. A universe where startling emergent properties arise to create--the way they created existence out of nothing, life out of matter, consciousness out of life, super consciousness out of consciousness, and finally, perhaps, an afterlife out of super consciousness--our transcendence of death.

We do not give up. If there is no afterlife to find, we will build one, and they will come. This would fulfill the ancient promise of all religions, but no one would care at that point.

I would prefer the built-in afterlife to one we constructed ourselves, since it is more likely to include my rotting bones and be richer.

----------


## desiresjab

Is actual randomness operant in the universe? What is randomness? We turn to mathematical thinking again. A sequence is random when there is no shorter way to store and produce it (such as compressed in a formula) than to list it element by element. That is why we cannot program actual randomness into a computer--because if we can program it, it ain't random, since by definition no such formula or algorithm should exist, or we end by contradicting our definition. We get as close as we can with complex pantomiming formulas called pseudo-randomness. These are close enough for human sensibilities, but within the range of science and other computers to exploit. Our pseudo-random formulas would be helpless against a quantum computer playing blackjack.
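The gap between pseudo-randomness and the real thing can be seen in a few lines of code. Here is a minimal sketch (my own illustration, not anything from the thread) of a linear congruential generator, one classic pseudo-random formula: the entire "random-looking" sequence compresses down to three constants and a seed, which is exactly why, by the definition above, it fails to be random.

```python
# A linear congruential generator (LCG), a classic pseudo-random scheme.
# The whole sequence is reproducible from a short program plus a seed,
# so there IS a description far shorter than the sequence itself --
# the opposite of randomness in the compression sense described above.

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random values generated from a seed
    (multiplier/increment constants from Numerical Recipes)."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

run1 = lcg(seed=42, n=5)
run2 = lcg(seed=42, n=5)
assert run1 == run2  # same seed, same "random" sequence: fully predictable
```

Anyone who knows (or infers) the constants and the seed can predict every future value, which is the sense in which such formulas are exploitable by "science and other computers."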

Randomness is a concept. Unlike the concept of fundamental counting, randomness may not be an unassailable concept. A universe without it is not unimaginable, but easily imaginable. Leaving it out of an imagined universe fails to make it unimaginable to me, like defying _two follows one_ does.

That is one view. Nature never produces a perfect circle either, so why do we need to insist that it manifest perfect, pure randomness? Is not the motion of excited gases close enough for us? Doesn't it make more sense at this point to attribute the chaotic 100,000 year journey of a photon from the interior of our sun to its corona to random forces of a nuclear traffic jam delaying it rather than something else?

That is the other view. I think randomness is only as unreal as pi is. And that never stopped pi. I visualize it as an asymptote of pure randomness which can be approached but never realized--Tantalus reaches for the fruit but it recedes just beyond his grasp. In some universe "nature" could get as close to random as you please like a limit in calculus without ever being purely random. I suppose that belief in a consciousness among elementary particles would have to exist in the vanishing remainder ignored by those limits.

The discussion of whether a universe can exist without consciousness depends, philosophically, on definitions. I do not believe the human race is even close to defining consciousness properly. Indeed the best models may turn out to be those that postulate a limited form of consciousness in some elementary particles. If that is where the math points, that is where we will go. Even though consciousness-postulating models might work, that would still be, of course, only indirect evidence for the existence of consciousness in particles, since our models are artifacts of varying trustworthiness, especially in unknown waters.

Can we vanish either randomness or consciousness entirely away, cosmologically or conceptually? I say maybe not as easily as I seemed to above. Furthermore, how unimaginable is a universe where asymptotic versions of randomness and consciousness interplay at a fundamental level we have not yet discovered? Or at least one where the best models are those that include both factors?

I predict an Einstein or Newton level transformation of scientific thinking in the next twenty years. We could be in one right now but unable to see the forest for the trees.

No matter when the discovery is made, it takes the biggest brains we have years and decades to sort out and clarify the implications and applications. The first guys were figuring out quantum physics a hundred years ago.

----------


## desiresjab

What Yes/No is calling the two sides of a coin, consciousness and randomness, I am trying to visualize more roughly as a simple function relating consciousness and randomness to reality, F(c,r)=Z, where the dependent value Z is reality, depending on two variables which act on each other.

In his version, one imagines there might be tunneling from one side to the other, entanglement, simultaneous existence in more than one place, and other details left for the imagination.

In my model you have to imagine two vertical asymptotes. The area between these asymptotes is reality. At least one point is traveling along a curve within this area. Its position left or right represents a ratio of randomness to consciousness determined by the type of phenomenon. One expects less order in a supernova explosion than in electrons quietly entangled in a lab experiment.

How close r or c may come to their asymptotes is unknown. The world we experience with our senses seems to highly favor the left asymptote, toward randomness. Entanglement and electron "choices" might be examples of consciousness.

If we make two points traveling in the area between the asymptotes of consciousness and randomness, we might account for entanglement and existence in two complementary states at once. A complementary set is, in fact, precisely the other side of the coin. We see the two "sets" magically interplaying at the same time as the two points move inside the boundaries between asymptotes they may never touch. For if one or the other touched an asymptote, the value of the other would have to be zero at that moment, representing pure consciousness or pure randomness. But these pure states can no more exist than pi can be fully expressed.
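Put in rough symbols (purely illustrative notation on my part, not a worked-out model), the asymptote picture is something like this:

```latex
% Z = reality, c = consciousness, r = randomness, both strictly positive.
% Collapse the mix into a single ratio t, which lives strictly between
% the two vertical asymptotes t = 0 (pure randomness) and
% t = 1 (pure consciousness).
\[
  Z = F(c, r), \qquad c, r > 0, \qquad
  t = \frac{c}{c + r} \in (0, 1).
\]
% A phenomenon's point can drift with t -> 0+ or t -> 1-, but neither
% limit is ever attained -- just as pi is approached by its digits but
% never fully written out.
```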

Anyway, a lot of things Yes/No and I are saying are similar. One difference is I do not claim to believe in as much. My only belief is that _there are more things under heaven and earth than your philosophy has ever dreamed of_, to give Shakespeare his due again, and that mathematics is of infinite complexity which cannot, however, be housed under one roof, whatever exactly that means. The real meaning and implications of Godel's theorems are still quite up for grabs by some genius with a great insight. Chaitin is working on such matters, as are others. Yes/No believes that consciousness pervades matter at all levels, while I only happily admit it is possible.

----------


## YesNo

> This all goes to the notion that we humans are special... in fact all the evidence, from Copernicus' observations to Relativity, Quantum Theory and Cosmic Inflation, suggests we are in fact _Homo sapiens_, that is, just another beast on a planet that can support organic life orbiting an unexceptional star. Evolution has endowed us with a level of consciousness; in that, we are not alone, as other beasts likewise have a level of _human-like_ consciousness... http://us.whales.org/blog/2012/08/we...er-species-are


I think I agree with all of that. Although we have opposite metaphysical positions regarding consciousness, I don't think we disagree when it comes to science and dualism. We both favor science and we both distrust dualism. For example, I have no problem with there being other species that have, or even had, human-like consciousness, whether on this planet or on other planets similar to ours, whether in this universe or in universes similar to ours. We are not special. 

Where we disagree would be on the role of consciousness in all these universes. I think consciousness is fundamental. You, I assume, believe consciousness can be derived from unconscious matter. I don't think unconscious matter exists and base that upon the uncertainty found in quantum physics. That uncertainty can be interpreted as quantum reality making a choice when it is tested by an experimenter and hence demonstrating enough consciousness to choose. I know that people will say that is crazy talk, but I think that is the underlying motivation behind many worlds and superdeterminism which are ways to explain away the uncertainty so the existence of consciousness is not a viable interpretation.

Take your example of stimulating the brain and then turning awareness (consciousness) on and off. The dualist would believe that the brain contains unconscious matter and that consciousness is separate from the unconscious brain. The dualist may even believe that only we are conscious which further traps the dualist. From my view, that brain being manipulated is conscious at many levels. Stimulating it only changes the way a conscious reality manifests itself. It does not turn consciousness on and off.

For a materialist, the existence of consciousness needs to be explained. For an idealist, the existence of what looks like unconscious matter needs to be explained. Consciousness is a problem because it leads to theism. An atheistic explanation of consciousness would allow for some form of panpsychism with weak emergent properties, that is, consciousness would be explained by reducing it to the consciousness in quantum reality. This ties consciousness to matter without implying the existence of any transcendent consciousness. 

If one has transcendent consciousness, then one has theism. I will admit that I am also a panentheist. I think that panpsychism with emergent properties is not adequate to explain the universe considering that the universe had a beginning. It implies consciousness goes beyond the universe and hence is transcendent.

----------


## YesNo

> Yes/No believes that consciousness pervades matter at all levels, while I only happily admit it is possible.


Yes, that is how I view consciousness. Matter is one way consciousness manifests itself.

----------


## desiresjab

> Yes, that is how I view consciousness. Matter is one way consciousness manifests itself.


It is a question philosophy frames but cannot answer. Only science and math are good at answering the questions philosophers can pose but have no chance of answering. When it comes to real answers we can trust, they will work because they are repeatable in experimental form and expressible in mathematical structures.

If I let myself stray too far from this rock, the next thing you know I am standing knee deep in philosophy. After a conjecture must come the hunt for evidence, not more extrapolation on the conjecture, or that turns into the interminable arguing of philosophers instead of science. It all comes down to getting enough evidence into a working model. That is all we can do, that is all we know, and that is about it.

The idea is immensely appealing, as many ideas are, and great for philosophical musings--a hippie paradise of Maharishi cosmic consciousness, a very loose idea that sprang to public awareness in the mid 20th century. We all like it, but _belief_ is a strong word. How can you say you believe it? Isn't it, rather, what you would like to believe and lean toward?

----------


## YesNo

> It is a question philosophy frames but cannot answer. Only science and math are good at answering the questions philosophers can pose but have no chance of answering. When it comes to real answers we can trust, they will work because they are repeatable in experimental form and expressible in mathematical structures.


I was reading parts of Leonard Susskind and Art Friedman's "Quantum Mechanics The Theoretical Minimum What you need to know to start doing physics" last night. It is not easy to get repeatable answers especially if you want to know more than one thing at a time like the position and the momentum of a particle or the spin of a particle along two different axes. 

They write that the statement, "The particle has position x *and* the particle has momentum p", is "completely meaningless (not even wrong)". (page 21)
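For reference, the limitation they describe is usually written as the Heisenberg uncertainty relation, which follows from the fact that the position and momentum operators do not commute (the non-commuting matrices mentioned earlier in the thread):

```latex
% The commutator of position and momentum fixes a floor on the product
% of their measurement uncertainties -- they cannot both be pinned down.
\[
  [\hat{x}, \hat{p}] = i\hbar
  \qquad\Longrightarrow\qquad
  \sigma_x\,\sigma_p \ge \frac{\hbar}{2}.
\]
```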

But that is what I like about positivism. Physicists with that perspective take evidence seriously even if it is not repeatable.




> If I let myself stray too far from this rock, the next thing you know I am standing knee deep in philosophy. After a conjecture must come the hunt for evidence, not more extrapolation on the conjecture, or that turns into the interminable arguing of philosophers instead of science. It all comes down to getting enough evidence into a working model. That is all we can do, that is all we know, and that is about it.


Although I tend to agree that one needs to keep evidence close at hand, none of us can stop believing (aka conjecturing, aka philosophizing). We are all knee deep in philosophy and some of us have waded into the deep end. Which I suppose means that we have lost the solid ground of evidence under our feet and have only reason to rely on.




> The idea is immensely appealing, as many ideas are, and great for philosophical musings, a hippie paradise of Maharishi cosmic consciousness, a very loose idea that sprung to public awareness in mid 20th century. We all like it, but _belief_ is a strong word. How do you say you can believe it? Isn't it, rather, what you would like to believe and lean towards?


The problem is everyone believes something. A deeper problem: some of us don't think we believe the facts we know are true.

I was thinking about the mathematical model E=mc^2. 

From an engineer's perspective that model is useful and convenient in getting nuclear power plants to work. 

From a philosophical perspective (perhaps out in the deep end, but who knows?), this relationship between energy and mass suggests there may be a similarity and a non-dualistic (aka monistic) relationship between consciousness and matter. It doesn't help determine which side wins out, energy or mass, but at least the concepts of "energy" and "mass" are better defined than "consciousness" and "matter".
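Since the arithmetic is easy to check, here is a back-of-envelope sketch (illustrative numbers of my own, not from the post) of why engineers find the model so convenient:

```python
# Plugging numbers into E = m c^2: the energy equivalent of one gram of mass.
c = 2.998e8        # speed of light, in m/s
m = 1.0e-3         # one gram, expressed in kg
E = m * c ** 2     # energy in joules

print(E)  # about 9e13 J -- roughly 25 GWh, hence the nuclear engineer's interest
```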

----------


## desiresjab

> I was reading parts of Leonard Susskind and Art Friedman's "Quantum Mechanics The Theoretical Minimum What you need to know to start doing physics" last night. It is not easy to get repeatable answers especially if you want to know more than one thing at a time like the position and the momentum of a particle or the spin of a particle along two different axes. 
> 
> They write that the statement, "The particle has position x *and* the particle has momentum p", is "completely meaningless (not even wrong)". (page 21)
> 
> But that is what I like about positivism. Physicists with that perspective take evidence seriously even if it is not repeatable.
> 
> 
> 
> Although I tend to agree that one needs to keep evidence close at hand, none of us can stop believing (aka conjecturing, aka philosophizing). We are all knee deep in philosophy and some of us have waded into the deep end. Which I suppose means that we have lost the solid ground of evidence under our feet and have only reason to rely on.
> ...


Science keeps upping the ante. We used to study the temperature at which water boils and the rate at which an object falls near earth's surface. Look what we have done with quantum applications though we barely know our way around. We traveled to the moon without knowing what 90% of the stuff we were traveling through was. Applications and answers are farther apart in understanding than many people realize.

Paradigm shifts may have a somewhat constant period, for as our tools increase exponentially so does the difficulty of the questions we tackle. I say twenty years but a hundred and twenty or more would not be surprising if this paragraph is true.

----------


## Dreamwoven

I liked this End of Days item: http://www.rt.com/news/315849-myster...ta-rica-cloud/

----------


## YesNo

> I liked this End of Days item: http://www.rt.com/news/315849-myster...ta-rica-cloud/


There is a blood moon coming up as well in about a week. I saw the last one. I hope I remember to watch this one before I get zapped by the end of time.

I understand that if the Higgs field collapsed we would all be dematerialized at the speed of light. It wouldn't be easy to give us any warning. I don't think it would hurt.

----------


## YesNo

> Science keeps upping the ante. We used to study the temperature at which water boils and the rate at which an object falls near earth's surface. Look what we have done with quantum applications though we barely know our way around. *We traveled to the moon* without knowing what 90% of the stuff we were traveling through was. Applications and answers are farther apart in understanding than many people realize.
> 
> Paradigm shifts may have a somewhat constant period, for as our tools increase exponentially so does the difficulty of the questions we tackle. I say twenty years but a hundred and twenty or more would not be surprising if this paragraph is true.


Are you referring to the Apollo US space missions in the early 1970's? Some people think they were unmanned.

----------


## desiresjab

> Are you referring to the Apollo US space missions in the early 1970's? Some people think they were unmanned.


Some people also think OJ was innocent and the world is six thousand years old.

The laughable evidence for the moon flight being a hoax was the flag that looked like it was blowing in a moon breeze, until these idiots learned it was made that way so that it would not droop straight down. Like anyone, these fools need real evidence to make their cases compelling. Not having it, however, has never stopped them, for they have Faith.

----------


## YesNo

You're old enough to know about OJ? I can still remember: "If the glove don't fit you must acquit." 

In the movie Interstellar there is a scene where the main character is informed by school authorities that the Apollo missions were staged for political purposes. When the idea hits the movies it has become mainstream. We'll find out in a couple of decades when the classified documents go unclassified. It wouldn't surprise me if they were faked, but I can still remember when I first learned that people considered this a possibility. It took me two days to get used to the idea. 

Even Newton believed the world was about 6000 years old. That mathematical model, I think it is safe to say, isn't true anymore.

----------


## Dreamwoven

It is easy to forget how threatened the US president felt when it looked very much as though the USSR was way ahead, with the first sputnik, first dog in space, and first man in space, Gagarin. This was really big stuff. I remember seeing Gagarin at one of the London exhibitions shortly after this. First to the moon became an American obsession and a promise from Kennedy that it would happen by the end of the 1960s. When did it happen? July 1969.

----------


## YesNo

> What Yes/No is calling the two sides of a coin with consciousness and randomness, I am trying to roughly visualize more as a simple function expressing a relationship between consciousness and randomness with reality, F(c,r)=Z, where the dependent value Z is reality and dependent on two variables which act on each other.


Dualism hopes that reality is, and not only behaves like, some F(c,r)=Z machine. One might be able to build a mathematical model that provides correct predictions about the behavior of reality (such as Ptolemy's epicycles) without the model correctly describing what reality is (the earth "really" goes around the sun). 

I've been an idealist for only a few years. Before that I would have thought people like George Berkeley were incredibly unrealistic. Today I still have a hard time with my intuition but I'm working on it, like any true believer, some might say, with a new theory to internalize. For example, I still like to see the world around me divided into those things able to act based on their choices (conscious agents or the "c" in your function) and those other things, the non-agents (the "r" in your function): unconscious matter out of which we design and build stuff like tables and sidewalks. But I'm working on it.

----------


## YesNo

> It is easy to forget how threatened the US president felt when it looked very much as though the USSR was way ahead, with the first sputnik, first dog in space and first man in space, Gagarin. This was really big stuff. I remember seeing Gagarin at one of the London exhibitions shortly after this.. First to the moon became an American obsession and was a campaign promise from Kennedy, by the end of the 1960s. When did it happen? July 1969.


I vaguely remember Gagarin. Some say that mission was faked as well, but I know little about it. I remember being in elementary school and the teacher had us sitting around a TV watching the liftoff of some (probably Apollo) space mission. I had to go to the bathroom and missed seeing the actual event and was given a lecture for not being in the room.

----------


## desiresjab

> I vaguely remember Gagarin. Some say that mission was faked as well, but I know little about it. I remember being in elementary school and the teacher had us sitting around a TV watching the liftoff of some (probably Apollo) space mission. I had to go to the bathroom and missed seeing the actual event and was given a lecture for not being in the room.


I remember standing outside with my dad and seeing Gagarin's point of light move across the sky.

Not that I would put it past ourselves or the Russians to attempt the hoax for political reasons, I just don't believe it happened, based on logic. Too many thousands would have to be involved in the coverup for too long for it to succeed. And not that I doubt the expertise of our intelligence agencies and propaganda arms at twisting exposure's wrist until it says uncle and actually turns into a benefit by discrediting the whistleblowers themselves, I just think our technological advances are real, and the 1969 moon landing was not an impractical task for 1969 technology, so our truth-bending machines were not needed to create false space flights.

There is a lot of outlandish conspiracy stuff that has gone or will go mainstream, to answer something from a prior post. Someone told me I can go on google maps and see physical space stations of aliens sitting right there on the moon. I haven't tried it, but I believe them, because they were laughing at the idea and indignant at the same time. Anyone can write goofy blogs or post doctored and edited videos that sound and look like actual in-flight, official video records of the event. That's why our guys or their guys could have done it in 1969 if they had to, because regular folks with made up theories are even capable of it these days.

----------


## YesNo

Just to play devil's advocate: if the only claim is that the missions were unmanned, then you could have seen Gagarin's point of light and people would have seen the liftoffs (unless they were in the bathroom) of the Apollo rockets. I can imagine something that could serve as a retroreflector could have been deposited, and moon stuff picked up, without a human being actually handling them. The photos we are shown of fancy retroreflectors as evidence we really, really went to the moon could have been taken on Earth.

But I don't care either way. I don't think there are aliens on the Moon's far side. Maybe on Ceres? (Just kidding.) I would like to know what those bright spots are.

----------


## desiresjab

> Just to play devil's advocate: if the only claim is that the missions were unmanned, then you could have seen Gagarin's point of light and people would have seen the liftoffs (unless they were in the bathroom) of the Apollo rockets. I can imagine something that could serve as a retroreflector could have been deposited, and moon stuff picked up, without a human being actually handling them. The photos we are shown of fancy retroreflectors as evidence we really, really went to the moon could have been taken on Earth.
> 
> But I don't care either way. I don't think there are aliens on the Moon's far side. Maybe on Ceres? (Just kidding.  ) I would like to know what those bright spots are.


Just to respond to devil's advocacy.

It just isn't that much harder to do the manned flight than what you suggest. In fact, robot technology to gather moon samples may have been slightly beyond 1969 technology. Might as well have a man aboard. It's easier.

Also, I assume that Sally Ride and all the other supposedly dead astronauts who died in falsely manned flights are enjoying pensions under false names in something like the witness protection program, scattered across the U.S.A.

We don't know what an alien is. It is hard to completely disbelieve in something which is scientifically quite feasible in every way. _There are more things under heaven and earth, Yes/No, than your philosophy has ever dreamed of_.

Taking one step to the left, I would be able to assume that life as we know it is a tiny strip of life's actual possibilities and manifestations. To me, any form of consciousness would be life. If it is conscious, it is alive, whether it has a body or not. Not only is anything possible, anything will be, if you take Shakespeare the way I do on this point.

You want miracles? I would have to consider it a miracle if there were no other life in the universe. I say universe because we are sure that exists, whereas we don't know yet (or maybe forever) if the multiverse does. I think infinite enfoldment of emergent properties means as much to me as it can, and needs only a single universe to be so anyway. Some of it is mere definition. If we discovered new universes, they would simply be parts of a new and larger single universe with us. The new universe might include a host of new schools of physics instead of the single one we believe in presently. Maybe I should make that _the two we presently believe in_. The two we believe will presently be one. Look at all that belief.

Infinite enfoldment of emergent properties outstrips the most vigorous imagination in its real output. Sensitivity to initial conditions produces infinite variety. Infinite variety of life is the possibility.

To say I believe it, would be to go too far, in fact mad. There is just something awfully curious about our being here at all, in the first place, that gives one pause to adopt almost anything at moments of deep reflection. There may as well be forms of consciousness we have no chance of ever communicating with, and others we can but barely, and others more like us somewhere in the vast reaches. There may as well be. Shakespeare says there are. There is room for everything to be true.

If there is something like infinite enfoldment of emergent properties operating in the universe, consciousness may turn out to be an emergent property from quantum mechanics that leaks into our scale. It may turn out that consciousness is irrepressible and breaks out in many places on any scale, like leaks in the Dutch boy's dike. Consciousness might have potential at any ordinality of scale or potency, from approaching infinitesimal to approaching infinity.

Given this, even our amazing film and science fiction industries would be helpless to compete with the possibilities for life forms that actually exist or have the potential to.

It is nice to dream and speculate. That is part of the reason I love literature as much as science & math. The human mind feeds on more than abstract rumination. Knowing the difference between belief and feeling is an important discrimination for the philosopher, though it will not necessarily make a difference to the effectiveness of the poet or even the mathematician. What I believe and what I feel share territory but are not one. Damn, I am not mad enough.

----------


## YesNo

> Just to respond to devil's advocacy.
> 
> It just isn't that much harder to do the manned flight than what you suggest. In fact, robot technology to gather moon samples may have been slightly beyond 1969 technology. Might as well have a man aboard. It's easier.
> 
> Also. I assume that Sally Ride and all the other supposedly dead astronauts who died in falsely manned flights, are enjoying pensions under false names in something like the witness protection program scattered across the U.S.A.


The skepticism that I think has some validity is only directed toward the manned missions during the space race. It does not include what Sally Ride did. 




> We don't know what an alien is. It is hard to completely disbelieve in something which is scientifically quite feasible in every way. _There are more things under heaven and earth, Yes/No, than your philosophy has ever dreamed of_.
> 
> Taking one step to the left, I would be able to assume that life as we know it is a tiny strip of life's actual possibilities and manifestations. To me, any form consciousness would be life. If it is conscious, it is alive, whether it has a body or not. Not only is anything possible, anything will be, if you take Shakespeare the way I do on this point.


I do not think consciousness needs a body either. Also I assume there are other forms of consciousness, perhaps some monitoring us right now, out there. I don't think the Ceres bright spots are an example of them, but perhaps we will find out later this year.




> You want miracles? I would have to consider it a miracle if there were no other life in the universe. I say universe because we are sure that exists, whereas we don't know yet (or maybe forever) if the multiverse does. I think infinite enfoldment of emergent properties means as much to me as it can, and needs only a single universe to be so anyway. Some of it is mere definition. If we discovered new universes, they would simply be parts of a new and larger single universe with us. The new universe might include a host of new schools of physics instead of the single one we believe in presently. Maybe I should make that _the two we presently believe in_. The two we believe will presently be one. Look at all that belief.


Actually, I don't want miracles. A miracle assumes there is an unconscious material substance under deterministic laws. These laws are violated by the miracle implying the existence of something superior to the unconscious matter. That is dualism. I don't believe in unconscious matter nor in determinism. So I don't need miracles.

The problem with emergent properties is that they imply reductionism. That would be my objection to Thomas Nagel's panpsychism. This is not to say that reductionist theories are not useful as models, but reductionism limits our possibilities to what can be found at the quantum level. Then one adds on a mechanism by which a new form can emerge from simpler forms. That mechanism, because it is a "mechanism", I find suspect. 




> Infinite enfoldment of emergent properties outstrips the most vigorous imagination in its real output. Sensitivity to initial conditions produces infinite variety. Infinite variety of life is the possibility.


I don't understand this, but it may not matter.




> To say I believe it, would be to go too far, in fact mad. There is just something awfully curious about our being here at all, in the first place, that gives one pause to adopt almost anything at moments of deep reflection. There may as well be forms of consciousness we have no chance of ever communicating with, and others we can but barely, and others more like us somewhere in the vast reaches. There may as well be. Shakespeare says there are. There is room for everything to be true.


I agree with this especially about there being something "awfully curious about our being here at all". We take our existences far, far too much for granted.




> If there is something like infinite enfoldment of emergent properties operating in the universe, consciousness may turn out to be an emergent property from quantum mechanics that leaks into our scale. It may turn out that consciousness is irrepressible and breaks out in many places on any scale, like leaks in the Dutch boy's dike. Consciousness might have potential at any ordinality of scale or potency, from approaching infinitesimal to approaching infinity.


I like the idea about consciousness leaking through the dike of unconscious matter. I don't believe there is a dike there at all, but it does give one the sense that there is a lot more hidden than we realize.




> Given this, even our amazing film and science fiction industries would be helpless to compete with the possibilities for life forms that actually exist or have the potential to.
> 
> It is nice to dream and speculate. That is part of the reason I love literature as much as science & math. The human mind feeds on more than abstract rumination. Knowing the difference between belief and feeling is an important discrimination for the philosopher, though it will not necessarily make a difference to the effectiveness of the poet or even the mathematician. What I believe and what I feel share territory but are not one. Damn, I am not mad enough.


Neither am I mad enough. Maybe some day.

----------


## desiresjab

> The skepticism that I think has some validity is only directed toward the manned missions during the space race. It does not include what Sally Ride did.


Some ride, some don't.




> I do not think consciousness needs a body either. Also I assume there are other forms of consciousness, perhaps some monitoring us right now, out there. I don't think the Ceres bright spots are an example of them, but perhaps we will find out later this year.


Since we may not be able to recognize them, they can also be very near. 





> Actually, I don't want miracles. A miracle assumes there is an unconscious material substance under deterministic laws. These laws are violated by the miracle implying the existence of something superior to the unconscious matter. That is dualism. I don't believe in unconscious matter nor in determinism. So I don't need miracles.


I am reminded that earlier in the discussion I used the term _dualism_ when the term I was grasping for was _pluralism_. Definitely different concepts under the assignments of philosophy. My bad.

I am afraid I was using the word _miracle_ more poetically and for its hyperbolic value (I will probably think of a more appropriate term here, too). The crossover zone of the two disciplines always gets me in trouble in the philosophy room. 





> The problem with emergent properties is that they imply reductionism. That would be my objection to Thomas Nagel's panpsychism. This is not to say that reductionist theories are not useful as models, but reductionism limits our possibilities to what can be found at the quantum level. Then one adds on a mechanism by which a new form can emerge from simpler forms. That mechanism, because it is a "mechanism", I find suspect.


This takes a little more thought. I don't know if I get it...."but reductionism limits our possibilities to what can be found at the quantum level." How so? 





> I like the idea about consciousness leaking through the dike of unconscious matter. I don't believe there is a dike there at all, but it does make one sense that there is a lot more hidden than we realize.


I believe dikes exist. I don't know about these figurative dikes, though. Is poetry or a poor cousin _unfolding_ in the discussion out of necessity? Metaphor and formula are strange bedfellows.

----------


## YesNo

> Since we may not be able to recognize them, they can also be very near.


Perhaps just like "many worlds", according to some interpretations of quantum mechanics. Who knows where they are? Conveniently, they can't be seen. This allows the skeptics on all sides a way out. They can always take a positivist perspective and deny meaning (or even "existence") to concepts that can't be repeatedly observed.

It is interesting that quantum experiments are not repeatable except on average when we switch between measuring observables whose operators don't commute. Ask an electron its position and it makes a choice. Ask an electron its momentum and it makes another choice. Then ask it for its position again, just to make sure you have it right, and it changes its fickle "mind", but only within a range of possible choices. At least that is how I see it at the moment, and my own fickle mind might change tomorrow.
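The non-commutativity behind all this is easy to see concretely. A minimal sketch in Python, using the standard Pauli spin matrices as stand-in observables (the true position and momentum operators live in an infinite-dimensional space, so this is only an illustration of AxB not equaling BxA):

```python
import numpy as np

# Pauli matrices: the simplest pair of quantum observables
# (spin along the x axis and spin along the z axis) that do not commute.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

print(X @ Z)    # [[ 0 -1]
                #  [ 1  0]]
print(Z @ X)    # [[ 0  1]
                #  [-1  0]]

# AxB != BxA; in fact these two anticommute: XZ = -ZX.
print(np.allclose(X @ Z, Z @ X))    # False
```

The order of measurement mattering is exactly this order of multiplication mattering.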




> I am reminded that earlier in the discussion I used the term _dualism_ when the term I was grasping for was _pluralism_. Definitely different concepts under the assignments of philosophy. My bad.


Generally, people believe in some sort of dualism or pluralism. Some things look like agents because you have to deal with their choices directly. Other things look like non-agents and can be manipulated without worrying about what they think about it.




> I am afraid I was using the word _miracle_ more poetically and for its hyperbolic value (I will probably think of a more appropriate term here, too). The crossover zone of the two disciplines always gets me in trouble in the philosophy room.


What I think people are looking for in a "miracle" is a sign of "love", not so much a violation of unconscious deterministic laws. The world would be very chaotic, perhaps even love-less, if those deterministic laws were too erratic.




> This takes a little more thought. I don't know if I get it...."but reductionism limits our possibilities to what can be found at the quantum level." How so?


Reductionism assumes that we only need information at the smallest level and we can show how everything else follows or evolves from that. That low level could be quantum stuff, or an atom (to move further up the reductionist chain), or a gene. They are the building blocks of bigger stuff. If reductionism is all there is, then everything we observe in the bigger stuff has to be in the smaller stuff. Hence panpsychism assumes consciousness is in the smaller stuff. One still needs some "operators" to get from the simpler stuff to the more complicated forms, and that is where the emergent belief comes in.

An alternate view would be that the smaller stuff is derived from more complicated stuff which, like the aliens, may not be visible to us.

If there is only reductionism, then all the possibilities have to be in the smaller stuff. That is where the limitation comes from. Of course there are the emergent operators, but how far do they extend the possibilities of this small stuff, if they are there at all?




> I believe dykes exist. I don't know about these figurative dikes, though. Is poetry or a poor cousin _ unfolding_ in the discussion out of necessity? Metaphor and formula are strange bedfellows.


I look at science as a form of literature.

----------


## desiresjab

There will always be another operator missing from wherever we stand. Isn't that the essence of Godel? The operator is XXX...but what operates that? There is nothing new in this game. That is why philosophy can never make progress. I am not saying philosophy could do anything else, mind you. I do not expect philosophy to become a fantasy exercise. But the transitory and purely speculative nature of its results make it a game played for fun rather than truth, and to me it is precisely a fantasy exercise with stylized constraints. I accept those stylized constraints, logic premier among them. None of us chooses a framework resembling Wittgenstein's for our discussion, though. A more freewheeling approach is superior, methinks. In fact, a discovery like Godel's instructs us to reinvent philosophy. I am glad that is not my job.

----------


## desiresjab

> In fact, a discovery like Godel's instructs us to reinvent philosophy. I am glad that is not my job.


Forum chorus:

AND SO ARE WE.

----------


## YesNo

> None of us chooses a framework resembling Wittgenstein's for our discussion, though. A more freewheeling approach is superior, methinks.


What was Wittgenstein's framework? I found him mostly unintelligible.

----------


## desiresjab

> What was Wittgenstein's framework? I found him mostly unintelligible.


Hardcore logical analysis. He is not known as a cosmologist, but as a mathematical logician. I might have been unfair to him. I just wanted someone who was difficult and a logician old enough to have once desired the perfect logical framework for all mathematics envisioned by Hilbert, which was apparently destroyed once and for all by Godel. I say apparently because the world of mathematics or philosophy is not through interpreting precisely what Godel implies, and perhaps never will be.

----------


## YesNo

Do you mean Whitehead and Russell? They wrote the _Principia Mathematica_. Wittgenstein wrote the _Tractatus Logico-philosophicus_.

----------


## desiresjab

> Do you mean Whitehead and Russell? They wrote the _Principia Mathematica_. Wittgenstein wrote the _Tractatus Logico-philosophicus_.


Whitehead and Russell were of the same ilk as Wittgenstein. Russell was Wittgenstein's teacher. The Principia and Tractatus had the same goal.

----------


## YesNo

What was that goal?

This is from the Tractatus (preface) where Wittgenstein claims the book's "whole meaning could be summed up somewhat as follows: What can be said at all can be said clearly; and whereof one cannot speak thereof one must be silent." http://www.gutenberg.org/files/5740/...2bc22a5186441d

----------


## desiresjab

> What was that goal?
> 
> This is from the Tractatus (preface) where Wittgenstein claims the book's "whole meaning could be summed up somewhat as follows: What can be said at all can be said clearly; and whereof one cannot speak thereof one must be silent." http://www.gutenberg.org/files/5740/...2bc22a5186441d


That goal was to place all of mathematics on unassailable footing by correspondence with formal logic. Wittgenstein was born early enough to have partaken in the frenzy of optimism in his early period until Godel sent everyone home.

Cosmology always gets back to the question of First Cause, one of the oldest arguments in formal philosophy. One can make pretty words and pretty arguments. This is what philosophers have done.

People great and small have lain awake pondering the question. We can make such statements as Gauss did that _All creation would be a waste without immortality_. It may even be a leakage of higher metalogic into our consciousness that allows one to share this intuition with Gauss in reflective moments. I have difficulty getting past a first cause with no motive in it. I see a lot of motive packed in nature (via infinite enfolding) at all levels. Perhaps it also exists at the quantum level. 

In the actual Game Of Life, from a few basic rules, staggering complexity evolved. You have to be smart to know a few basic rules will do this. Or maybe a meta-consciousness is merely letting its experiment run to see what happens, like the inventor of the game was doing. Slightly different rules lead to vastly different unfoldings in the game.
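The rules really are few. A minimal sketch of Conway's Game of Life in Python (a dead cell is born with exactly three live neighbours; a live cell survives with two or three) is enough to let the game run:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours every nearby cell has.
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # Born with exactly 3 neighbours; survive with 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # a horizontal bar of three cells
print(step(blinker))                   # flips to a vertical bar
print(step(step(blinker)) == blinker)  # True: a period-2 oscillator
```

That one `step` function, iterated, is the whole game; gliders, guns, and even universal computation all unfold from it, and changing any of the birth/survival thresholds gives a very different universe.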

Everything coming about for nothing is the hard part to accept. Something has to push the start button. But something had to create that something. It appears there can be no first cause. To our minds this can only mean either a type of circularity or everything existing at once without cause. 

I feel everything could not come about for nothing, but our particular existence, our kind, even our minerals, _did_ come about for nothing, without plan, without particulars. In the game of life you do not knock a piece to another position with your finger; you simply let the game run and the complexity unfold. Any creator would be too proud to interfere, so forget divine intervention. The game runs, that's all, its simple rules managing infinitely enfolded emergent properties blindly, i.e. without intervention.

These opinions are more philosophy than cosmology, I know. I am not sure they are quite beliefs. But they are close.

----------


## YesNo

> That goal was to place all of mathematics on unassailable footing by correspondence with formal logic. Wittgenstein was born early enough to have partaken in the frenzy of optimism in his early period until Godel sent everyone home.


I think Wittgenstein is looking more at an ideal human language than at mathematics. Mathematics starts with assumptions. They are the closest thing mathematics has to empirical data, but they do not come from sense experience. Then reason takes over. A human language is talking about "facts" and not just assumptions, which are statements.

I'm trying to read his essay to see if it makes more sense today. It made no sense to me 15 years ago.




> I have difficulty getting past a first cause with no motive in it. I see a lot of motive packed in nature (via infinite enfolding) at all levels. Perhaps it also exists at the quantum level.


I see no way to have "motive" without "consciousness". What evidence is there that "infinite enfolding" exists? 




> In the actual Game Of Life, from a few basic rules, staggering complexity evolved. You have to be smart to know a few basic rules will do this. Or maybe a meta-consciousness is merely letting its experiment run to see what happens, like the inventor of the game was doing. Slightly different rules lead to vastly different unfoldings in the game.


This sounds too mechanistic for me.




> In the game of life you do not knock a piece to another position with your finger; you simply let the game run and the complexity unfold. Any creator would be too proud to interfere, so forget divine intervention. The game runs, that's all, its simple rules managing infinitely enfolded emergent properties blindly, i.e. without intervention.


Those pieces you might move imply a belief that unconscious matter really exists. 




> These opinions are more philosophy than cosmology, I know. I am not sure they are quite beliefs. But they are close.


I would say they are beliefs and on no firmer foundation than belief in a deity. What makes the foundation firm? That is where philosophy becomes valuable clarifying what is at stake so one isn't basing one's life upon assumptions one has not examined.

----------


## desiresjab

> I think Wittgenstein is looking more at an ideal human language rather than mathematics. Mathematics starts with assumptions. They are the closest thing mathematics has to empirical data, but they do not come from sense experience. Then reason takes over. A human language is talking about "facts" and not just assumptions which are statements.


Wittgenstein was not talking about a normal everyday language to write poetry in or flirt with a girl. This is the same unassailable language Russell and Whitehead and Peano were after. It is a language of logical propositions. Wittgenstein is not teaching people how to write clearer prose. His is the language of pure mathematics in the clothing of unassailable logic. That is the ideal human language he was striving toward.

Look at it this way: Wittgenstein was not some kind of rebel; he was a principal player in logical positivism. He was not primarily at odds with the other players, but held their view of this "perfect language." 

_There are more things under heaven and earth, Horatio, than your philosophy has ever dreamed of_. 

The above is not a verifiable statement, and therefore is not cognitively meaningful; it is a mere pseudostatement. I think you need to discard any notion that Wittgenstein and the others were after a normal language to accomplish their purposes. A quite limited form of English, for instance, would have carried the load. This restricted language might have been useful in the courtroom or the classroom, but not when shooting the breeze with your neighbor or demonstrating how much you love your wife.

Their axiomatic skeleton of language would never have produced great poetry. It never did, and it never would have. And the fact is, they could not make it work for their other purpose, either. Shakespeare succeeded, Wittgenstein and his allies failed in their interesting experiment. Only Godel succeeded, and he did this by proving their efforts were doomed to ultimate failure.

Leibniz, over two hundred years earlier, had dreamed of the same perfect language.

----------


## YesNo

I agree that Whitehead, Russell and Wittgenstein were trying to find a language that represented everything objectively. Perhaps an unconscious machine could be made to use it.

However, the subject matters of their languages were different. Whitehead and Russell started with propositions (axioms or assumptions) as the subject matter and from these initial statements derived other statements. They assumed their language was both consistent and complete. Godel showed it was not complete, assuming it was consistent. There were statements that could be formed in their language that could not be derived from it. They could not know everything about their subject matter they might think they should be able to know.

I don't think Godel addressed Wittgenstein's language whose subject matter was the world. His subject matter was not a set of propositions like Whitehead and Russell had as their subject matter, but "facts" about the world. He tried to reduce the "world" to "facts" so that his language could manipulate them.

To show they were talking about a different subject matter consider 2.0211: "If the world had no substance, then whether a proposition had sense would depend on whether another proposition was true." Whitehead and Russell were starting with initial propositions (axioms, not "facts") that had no "substance". 

Quantum physics might have done to Wittgenstein's language what Godel did to Whitehead and Russell's Principia Mathematica. Wittgenstein cannot know all the "facts" about the world that he could formulate. He cannot know, for example, both the position and momentum of a quantum particle at any given time, but they could both be represented as facts. This seems to falsify 1.11: "The world is determined by the facts, and by their being _all_ the facts."

----------


## desiresjab

> I agree that Whitehead, Russell and Wittgenstein were trying to find a language that represented everything objectively. Perhaps an unconscious machine could be made to use it.
> 
> However, the subject matters of their languages were different. Whitehead and Russell started with propositions (axioms or assumptions) as the subject matter and from these initial statements derived other statements. They assumed their language was both consistent and complete. Godel showed it was not complete, assuming it was consistent. There were statements that could be formed in their language that could not be derived from it. They could not know everything about their subject matter they might think they should be able to know.
> 
> I don't think Godel addressed Wittgenstein's language whose subject matter was the world. His subject matter was not a set of propositions like Whitehead and Russell had as their subject matter, but "facts" about the world. He tried to reduce the "world" to "facts" so that his language could manipulate them.
> 
> To show they were talking about a different subject matter consider 2.0211: "If the world had no substance, then whether a proposition had sense would depend on whether another proposition was true." Whitehead and Russell were starting with initial propositions (axioms, not "facts") that had no "substance". 
> 
> Quantum physics might have done to Wittgenstein's language what Godel did to Whitehead and Russell's Principia Mathematica. Wittgenstein cannot know all the "facts" about the world that he could formulate. He cannot know, for example, both the position and momentum of a quantum particle at any given time, but they could both be represented as facts. This seems to falsify 1.11: "The world is determined by the facts, and by their being _all_ the facts."


I think it was an attack on the same fort from a different angle. A true proposition was a fact to Russell and Whitehead. Wittgenstein tried to redefine fact. He cannot get away from propositions, though. 

"If the world had no substance, then whether a proposition had sense would depend on whether another proposition was true."

That is an if/then proposition. Are we to believe it?

That whole school which quibbled over language and propositions was a batch of necessary reasoning in our historical unfolding, then the world moved on the way it moved on from Kant. Not three people a year read Principia Mathematica. Fifty might finish a work by Wittgenstein, but probably not that many. After Godel, only an end run was possible. I think that is all Witty was up to. But I am not about to delve into a year of him so I can say so for sure. I believe I know roughly what he was up to, though there were certain differences you point out, because it is what they have all been up to (philosophers) since the Greeks. They wanted to talk the universe plain. The universe is not plain, though. Shakespeare knew that before Wittgenstein was born.

----------


## YesNo

I don't think Wittgenstein is worth reading either, but his Tractatus keeps coming up like Joyce's Finnegans Wake, and so I pick it up to remind myself why I put it down.

----------


## desiresjab

> I don't think Wittgenstein is worth reading either, but his Tractatus keeps coming up like Joyce's Finnegans Wake, and so I pick it up to remind myself why I put it down.


Russell and Whitehead's language might have had a chance at becoming a real tool, if it had only been what they had hoped. But Wittgenstein's language sounds like an abstract ideal rather than something that could actually come about and be useful. It would have been more intractable than legal documents, if it did come about. I am not sure who it could be useful to besides lawyers. Not science? Probably not, but maybe. I am not sure what his purpose for it was.

----------


## Dreamwoven

I tend to agree with desiresjab on this.

----------


## desiresjab

Something would be amiss if I were not continually changing my views and adjusting the dial--I would be dead. But I live because I feel pity tinged with scorn for those with too much belief. An excess of belief always means _this brain is closed for the business of vertical thought_.

Too bad. I am old enough to have seen a few fine men fall into the maws of Jesus or Muhammad, only to emerge without their brains.

Family values and the golden rule _et al_--those are beliefs to keep forever. The search for ultimate truth is different. Shed views anytime the skin no longer fits.

For millennia we have had it rammed down our throats that to not believe is to suffer eternal death.

I might say, then, that _belief_ is central to all religions because a clue is being given. 

However, I have no such belief. I am merely looking things over. Aristotle said that the mark of an educated mind is the ability to entertain a thought without accepting it. I want all my guests to be fully entertained at least once.

Why is the big entity reduced to clues? Believers scramble for an answer here. Why clues? I can hear the cries of free will now.

Maybe that keeps me free. It seems to have enslaved the big entity, though, who is unable to make a clear point without trickery and double talk.

----------


## desiresjab

What could such a clue even mean, entertaining for a moment the thought that all religion springs from truth? Let's see. Belief would have to be connected to some kind of quantum readiness state of our consciousness to ascend the fish ladder after death, for the clue to make any physical sense to my hillbilly mind.

Here is the mockery. The prized belief is in a God apparently worthy of no one's belief, an entity that tortures its children and makes bad excuses.

Someone tipped us off. Now I get it. God had to be sly, didn't he? For there is another entity, isn't there now? That is why God is as evil as good. The devil restrained his hand, even in his own Good Books.

So God is evil, then, but maybe slightly more positive than negative--for the clue did reach us after all...ahem!..

Even mathematics would be easier without ol' Beelzebub lurking inside God. God is eev-yil. God said so. We are in his image. The bad father loves us, though, and battles the eev-yil one who is but himself for our passage into the afterworld.

Why, sure.

----------


## YesNo

> ...some kind of quantum readiness state of our consciousness...


What is a "quantum readiness status of our consciousness" but another belief system?

----------


## desiresjab

> What is a "quantum readiness status of our consciousness" but another belief system?


The connection between consciousness and quantum mechanics, if any, is nebulous. Now the psychology of quantum mechanics is completely theoretical, but at least reasonable. "Belief" as an important factor was speculative. A relatively clear conscience might be what is required, not belief at all. That might be the readiness state, since we know nil about this theoretical psychology.

----------


## desiresjab

If the universe came from nothingness, then nothingness is at least that which is capable of producing existence out of itself. It would appear even nothingness has potential, then. Nothingness+potential, but is that really nothingness? How about nothing without any potential whatsoever? Now that's nothingness.

Pure nothingness seems no more possible than pure randomness. Nothingness cannot expel its potentiality any more than matter can expel the necessity of two following one. Is potential even more fundamental than the laws of math, or is it the other way around? If potential can precede existence, then maybe the laws of math can too? Maybe the laws of math exist as potential before existence itself, or maybe they precede and channel potential. The number of things existing before existence could start to pile up.

It is an interesting sidebar to note that Russell and Whitehead required 360 pages of Principia Mathematica to prove that two follows one. The proof was successful.
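A toy sketch of the idea behind that proof can fit in a few lines. This is nothing like the Principia's actual formal system; it is just Peano-style successor arithmetic encoded as nested tuples, to make the point that "one plus one is two" is a derived theorem rather than a given:

```python
# Peano-style naturals: zero is the empty tuple, and each successor
# wraps its predecessor in one more layer of nesting.
ZERO = ()

def succ(n):
    """The successor of n."""
    return (n,)

def add(m, n):
    # Peano addition, defined by recursion:
    #   m + 0       = m
    #   m + succ(n) = succ(m + n)
    return m if n == ZERO else succ(add(m, n[0]))

one = succ(ZERO)
two = succ(one)
print(add(one, one) == two)   # True: 1 + 1 = 2, derived from the rules
```

Everything here follows from the definitions of zero, successor, and addition; nothing about "two" is assumed up front.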

----------


## YesNo

Russell and Whitehead started with some assumptions, perhaps set-theoretic ones. They didn't start from nothing.

There are some problems I have with "nothing". 

First, it seems to be a metaphor for a void in space, but the nothing that preceded our universe was not in space. Not only was there nothing, but there was no space either. That means there were no fields where a void could occur. 

Second, focusing on nothing assumes that the universe happened in some reductionist way. That is, the only thing one had to work with was nothing. 

Third, did consciousness exist during this period of nothingness? If it did then that would be the non-reductionist "something" out of which the universe could have arisen from "nothing".

----------


## desiresjab

> Russell and Whitehead started with some assumptions, perhaps set-theoretic ones. They didn't start from nothing.
> 
> There are some problems I have with "nothing". 
> 
> First, it seems to be a metaphor for a void in space, but the nothing that preceded our universe was not in space. Not only was there nothing, but there was no space either. That means there were no fields where a void could occur. 
> 
> Second, focusing on nothing assumes that the universe happened in some reductionist way. That is, the only thing one had to work with was nothing. 
> 
> Third, did consciousness exist during this period of nothingness?  If it did then that would be the non-reductionist "something" out of which the universe could have arisen from "nothing".


I mention the proof of two following one because it has been a big part of the discussions here, not because I am trying to say something about the way the universe was formed out of nothing.

If anything existed--consciousness, potential, limits--it would qualify as something. One definite problem with nothingness is that we cannot quite get there in our imaginations. We can say the word itself, so we fall back on saying the word when imagination fails.

Existence is less problematical than nothingness, because we can self-verify the former. But nothingness is a concept only and fraught with paradox. Reductionist philosophy might require the universe to spring from nothing, but my own definition of nothingness says that is impossible.

I could argue absolute nothingness cannot exist because it has not already occurred in infinite time. There is no imaginable way for nothingness to supplant existence, any more than there is for existence to supplant nothingness.

----------


## YesNo

> If anything existed--consciousness, potential, limits--it would qualify as something. One definite problem with nothingness is that we cannot quite get there in our imaginations. We can say the word itself, so we fall back on saying the word when imagination fails.
> 
> Existence is less problematical than nothingness, because we can self-verify the former. But nothingness is a concept only and fraught with paradox. Reductionist philosophy might require the universe to spring from nothing, but my own definition of nothingness says that is impossible.
> 
> I could argue absolute nothingness cannot exist because it has not already occurred in infinite time. There is no imaginable way for nothingness to supplant existence, any more than there is for existence to supplant nothingness.


Perhaps philosophically defining what "something" is should be done first. This would be the basis for scientific hypotheses that experiments could attempt to falsify. I would think "something" requires a space in which it can exist. Given that it might change, it also requires time to perform the change. So if one doesn't have space or time, then we cannot have "something". Does consciousness or potentiality require space and time?

Part of the problem is with the metaphors we use. They imply a preferred solution to a problem which may be leading us in the wrong direction.

I was reading a popular survey of quantum physics some time ago describing the wave pattern with the darker interference areas that a series of single photons made on a detection panel after going through a double slit. The article noted that if one measured which slit each photon went through then the wave pattern was destroyed. To get the wave pattern the article said that each photon split and went through both slits so it could interfere with itself. Now how would anyone know that? If they actually looked, they would only see one photon and the wave pattern would be gone. The idea of the photon splitting and going through both slits is only a kind of metaphor. It is philosophic speculation that tries to explain the evidence. Underlying that particular metaphor is a belief that "something" is nonetheless there causing the interference pattern.
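For what it's worth, the visible pattern itself can be written down without deciding what the photon "really" does between the slits. Here is a minimal sketch of the ideal two-slit intensity, with slit separation, wavelength, and screen distance all invented for illustration (none of them come from the article being discussed):

```python
import math

# Illustrative values only -- not taken from the article.
d = 1e-6      # slit separation, metres
lam = 500e-9  # wavelength, metres
L = 1.0       # slit-to-screen distance, metres

def intensity(x):
    """Relative brightness at screen position x (metres) for an ideal
    double slit, using the small-angle result I = cos^2(pi*d*x/(lam*L)),
    normalised so the central maximum equals 1."""
    return math.cos(math.pi * d * x / (lam * L)) ** 2

# The first dark fringe sits where the path difference from the two
# slits is half a wavelength: x = lam*L/(2*d).
first_dark = lam * L / (2 * d)
```

The sketch only shows that the dark fringes fall exactly where two-path interference predicts; it says nothing about whether the photon "split," which is the metaphor being questioned.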

----------


## desiresjab

> Perhaps philosophically defining what "something" is should be done first. This would be the basis for scientific hypotheses that experiments could attempt to falsify. I would think "something" requires a space in which it can exist. Given that it might change, it also requires time to perform the change. So if one doesn't have space or time, then we cannot have "something". Does consciousness or potentiality require space and time?
> 
> Part of the problem is with the metaphors we use. They imply a preferred solution to a problem which may be leading us in the wrong direction.
> 
> I was reading a popular survey of quantum physics some time ago describing the wave pattern with the darker interference areas that a series of single photons made on a detection panel after going through a double slit. The article noted that if one measured which slit each photon went through then the wave pattern was destroyed. To get the wave pattern the article said that each photon split and went through both slits so it could interfere with itself. Now how would anyone know that? If they actually looked, they would only see one photon and the wave pattern would be gone. The idea of the photon splitting and going through both slits is only a kind of metaphor. It is philosophic speculation that tries to explain the evidence. Underlying that particular metaphor is a belief that "something" is nonetheless there causing the interference pattern.


I define nothingness as having no properties whatsoever. It has no space, no time, no energy, no potential, and therefore could never produce anything.

An appropriate scientific definition might be a vacuum minus the cosmological constant for dark energy. That would effectively be a "vacuum within space." Vacuum is only along the road to nothingness.

We cannot well imagine nothingness, so maybe it does not really exist. Of course ancient peoples could not have well imagined curved space, no matter how they might have tried, and that did not stop it from existing.

Philosophy cannot answer scientific or mathematical questions. Whatever reality is, it is more strange than we think. Philosophically, Buddha got it right with the _all is maya_ idea.

We are in the greatest scientific and mathematical explosion of all time. Collating it all is a gigantic problem. This is why progress is traditionally slow in these fields. It takes the power of the greatest brains years or decades to decide what a new discovery even means in many cases, and to try to understand some of the implications.

Computers and instant communications sped this process up exponentially. The process will experience another exponential burst sooner rather than later, if we can hold on as a species. Cyborgs and quantum computers will lead us in directions undreamed of. You could feel confident betting on either one to begin showing up within twenty years.

I have felt lucky to have been born in an age relatively free of superstition where I can witness incredible discoveries. For freedom and quality of life I do not know if subsequent generations can match us, but discovery-wise the future is going to be the greatest show ever on earth, making our own progress seem quite pale and slow. Machines will be able to make the leap from theory to application exponentially faster by sorting through all discoveries. Many discoveries to make other discoveries may be lying in front of us now, unsorted, unrecognized for what they are and can do.

I predict the work of Ramanujan will play a further part yet. Some of his formulae are as magical as incantations.

----------


## YesNo

> I define nothingness as having no properties whatsoever. It has no space, no time, no energy, no potential, and therefore could never produce anything.


That is how I see it also. Our universe could not have originated from nothing. However, I don't see why consciousness requires space, time, energy or potential. It could be the only eternal reality.




> An appropriate scientific definition might be a vacuum minus the cosmological constant for dark energy. That would effectively be a "vacuum within space." Vacuum is only along the road to nothingness.


Unfortunately this concept of "nothing" assumes the presence of space. The word "vacuum" is not an adequate substitute for "nothing".




> We cannot well imagine nothingness, so maybe it does not really exist. Of course ancient peoples could not have well imagined curved space, no matter how they might have tried, and that did not stop it from existing.
> 
> Philosophy cannot answer scientific or mathematical questions. Whatever reality is, it is more strange than we think. Philosophically, Buddha got it right with the _all is maya_ idea.


What we think of as "something" may well all be maya. What we are doing right now is philosophy.

----------


## desiresjab

First Principle is unknown. From First Principle evolved many beings. Some of them were gods to us. Hierarchies of gods in meta realities that barely intersect came into being. The gods who created us are as ignorant as we are of First Principle. We were a tabletop exercise for them. They suspect they may be the same.

If we share physics with no other beings, what could we share with their meta reality? Only the highest grade of information, perhaps. Along the nodes of the Riemann zeta function in complex numbers in multi-dimensional space, perhaps they have multi-dimensional message boxes. Fanciful.

Many famous problems were solved last century, but no paradigm shifts have come yet. Usually a physical theory causes the shift, for which the mathematics is found to already exist. An exception was calculus, which provided its own impetus for a couple of centuries.

----------


## YesNo

Some paradigm shifts in cosmology from the last century are the existences of galaxies other than our own and the big bang.

----------


## desiresjab

> Some paradigm shifts in cosmology from the last century are the existences of galaxies other than our own and the big bang.


Yes. Edwin Hubble ranks with anyone for impact. His was the kind of discovery that others were brainy enough to have made, but he was there at the right time with the right instrument. Other discoveries like the invention of calculus or relativity require a super genius and nothing less.

----------


## YesNo

I picked up Milton A. Rothman's "Discovering the Natural Laws" in a used bookstore in Door County, Wisconsin, recently. He has two points he wants to make. First, he wants to show the empirical evidence for the laws of physics in order to justify those laws which is the main reason I'm reading the book. Second, he wants to discredit his own consciousness. He doesn't put the part about consciousness in those terms, but that's how I read it. 

Anyway, he writes this about how the laws of physics have not changed (and I admit, for practical purposes, it is a useful assumption we might as well make):

_"Thus we know, by direct evidence, that the laws which operate here and now are the same laws that existed early in the history of the universe. They have not changed in all the time that has elapsed since very soon after the beginning." (page 209)_
Because of that beginning, we can't say "we know" the laws have not changed, or rather, that the universe is actually following the laws we think it is.

----------


## desiresjab

> I picked up Milton A. Rothman's "Discovering the Natural Laws" in a used bookstore in Door County, Wisconsin, recently. He has two points he wants to make. First, he wants to show the empirical evidence for the laws of physics in order to justify those laws which is the main reason I'm reading the book. Second, he wants to discredit his own consciousness. He doesn't put the part about consciousness in those terms, but that's how I read it. 
> 
> Anyway, he writes this about how the laws of physics have not changed (and I admit, for practical purposes, it is a useful assumption we might as well make):
> 
> _"Thus we know, by direct evidence, that the laws which operate here and now are the same laws that existed early in the history of the universe. They have not changed in all the time that has elapsed since very soon after the beginning." (page 209)_
> Because of that beginning, we can't say "we know" the laws have not changed, or rather, that the universe is actually following the laws we think it is.


Whether or not the laws have ever changed is something I have to leave to the professional physicists, for no one else can ever settle it. If the laws have changed once they can probably change again.

One interesting notion is that each black hole contains a separate universe with all the features of our own--stars, planets, galaxies and black holes. Each of these black holes also contains another universe, and so on ad infinitum... Pairs of virtual particles can be caught exactly at the border of the black hole. One particle goes in, the other stays out, which is a readable phenomenon because it is happening _en masse_ at the border. I am intrigued but not sure what it means or implies.

Something that I do understand is the notion that we are a simulation instead of an actual life form. This only requires one assumption, and maybe not even that--that all things are happening at once, past, present and future, but we are merely unable to experience them that way. The reason I say _maybe not even that_ is that the numbers also suggest the past--which is the present to us--has likely already completed itself.

Either way, our descendants have powerful ancestor simulation software and the capability to run their simulations trillions upon trillions of times. With numbers like that and only one "real" reality, it is almost statistically impossible for us to be the original real beings, but rather a sophisticated simulation of them.
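The counting behind "almost statistically impossible" can be made explicit. A toy sketch, where the number of simulated histories is a made-up illustrative figure, not anything from the argument itself:

```python
def chance_of_being_original(num_simulations):
    """If one 'real' history coexists with num_simulations indistinguishable
    simulated ones, a randomly placed observer is in the original history
    with probability 1 / (num_simulations + 1)."""
    return 1.0 / (num_simulations + 1)

# "Trillions upon trillions" of runs -- an invented stand-in value.
p = chance_of_being_original(10**24)
```

With any astronomically large run count the probability of being original collapses toward zero, which is all the statistical step of the argument amounts to; the contested premises are whether such simulations are possible and whether they are ever run.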

These simulations will be (are) so familiar among our descendants, they will probably become a common school project at some point. The equivalent of a schoolgirl in her own society may have created our universe. This would explain its imperfections better than any junk about free will _et al_. A _Universal Simulator_ would perhaps be the size of a cell phone, and surely would allow some parameters to be adjusted by the student or scientist, or it would not be interesting enough.

----------


## YesNo

> Whether or not the laws have ever changed is something I have to leave to the professional physicists, for no one else can ever settle it. If the laws have changed once they can probably change again.


I don't leave anything to them. They have to convince me.




> One interesting notion is that each black hole contains a separate universe with all the features of our own--stars, planets, galaxies and black holes. Each of these black holes also contains another universe, and so on ad infinitum... Pairs of virtual particles can be caught exactly at the border of the black hole. One particle goes in, the other stays out, which is a readable phenomenon because it is happening _en masse_ at the border. I am intrigued but not sure what it means or implies.


Is there any empirical evidence for that? Could you post a link?




> Something that I do understand is the notion that we are a simulation instead of an actual life form. This only requires one assumption, and maybe not even that--that all things are happening at once, past, present and future, but we are merely unable to experience them that way. The reason I say _maybe not even that_ is that the numbers also suggest the past--which is the present to us--has likely already completed itself.
> 
> Either way, our descendants have powerful ancestor simulation software and the capability to run their simulations trillions upon trillions of times. With numbers like that and only one "real" reality, it is almost statistically impossible for us to be the original real beings, but rather a sophisticated simulation of them.
> 
> These simulations will be (are) so familiar among our descendants, they will probably become a common school project at some point. The equivalent of a schoolgirl in her own society may have created our universe. This would explain its imperfections better than any junk about free will _et al_. A _Universal Simulator_ would perhaps be the size of a cell phone, and surely would allow some parameters to be adjusted by the student or scientist, or it would not be interesting enough.


Any metaphors we come up with that we think could be true should be questioned. What is the metaphor telling us about ourselves? Why do we think this metaphor is credible in the 21st century?

----------


## Dreamwoven

Awesome stuff, though makes my head spin. *Withdraws from discussion*

----------


## desiresjab

> I don't leave anything to them. They have to convince me.
> 
> 
> 
> Is there any empirical evidence for that? Could you post a link?
> 
> 
> 
> Any metaphors we come up with that we think could be true should be questioned. What is the metaphor telling us about ourselves? Why do we think this metaphor is credible in the 21st century?


These fellows come up with an interesting idea then spend the rest of their careers traveling around lecturing on one idea.

If I experience myself, if I become sentient, that means there is already a vastly greater chance I am a simulation rather than the real organic phenomenon. This, of course, was also true of the first originals, where applying the notion would have yielded an incorrect conclusion. If someone cannot admit the logical weight of a proposition, I begin to suspect their objectivity. Recall: _The mark of an educated mind is the ability to entertain an idea without accepting it_. I do not accept our simulated existence as a hardcore belief either. I accept simulation as a logical conundrum which forces itself upon us and is not easily, if at all, explained away.

Now what does it really take to entertain this idea of simulation rather than just repeating the words of the proposition? One immediately flies to probability. Unless there are many other universes, the values for some of our constants stand out glaringly, not as impossible, but as highly suspect.

You probably noticed that the supposition of an "original organism" or universe, is not necessary. The "makers" could come from a reality where space and time simply do not exist at all. Space and time could be invented concepts of the makers, who then set about creating imperfect realities where those things could exist as more than concepts.

A human should be able to locate within themselves an objection or two to any proposition. I do not present this idea as a truth, but as a "_hey, look here_" moment. Do I fail to accept this idea of our simulation as a truth because I can find a logical foothold against it, or simply because I do not like the idea itself? People hate ideas when they rub against the grain of what they already want to believe. I do not even know what it is I want to believe, but there could be something lurking back there I am not aware of.

> I don't leave anything to them. They have to convince me.

That is leaving it to them, leaving it to them to convince you, which is no more than they had to do in the first place.

Along with yourself, I will continue to question about anything anyone throws up for consideration. So far, however, I do not see anyone dispatching the idea of simulation with viable refutations based on logic. 

Anyone, send the idea away squalling like an injured dog, if you can, and we will move on.

----------


## YesNo

> If I experience myself, if I become sentient, that means there is already a vastly greater chance I am a simulation rather than the real organic phenomenon. This, of course, was also true of the first originals, where applying the notion would have yielded an incorrect conclusion. If someone cannot admit the logical weight of a proposition, I begin to suspect their objectivity. Recall: _The mark of an educated mind is the ability to entertain an idea without accepting it_. I do not accept our simulated existence as a hardcore belief either. I accept simulation as a logical conundrum which forces itself upon us and is not easily, if at all, explained away.
> 
> Now what does it really take to entertain this idea of simulation rather than just repeating the words of the proposition? One immediately flies to probability. Unless there are many other universes, the values for some of our constants stand out glaringly, not as impossible, but as highly suspect.
> 
> You probably noticed that the supposition of an "original organism" or universe, is not necessary. The "makers" could come from a reality where space and time simply do not exist at all. Space and time could be invented concepts of the makers, who then set about creating imperfect realities where those things could exist as more than concepts.
> 
> A human should be able to locate within themselves an objection or two to any proposition. I do not present this idea as a truth, but as a "_hey, look here_" moment. Do I fail to accept this idea of our simulation as a truth because I can find a logical foothold against it, or simply because I do not like the idea itself? People hate ideas when they rub against the grain of what they already want to believe. I do not even know what it is I want to believe, but there could be something lurking back there I am not aware of.


One could say that any delusion is a simulation, perhaps a self-simulation, hiding what is really going on. One doesn't need computers to do this. Nor does it have to get technologically fancy. Lies are simulations of reality with the intent to delude others.

If the "makers" are outside space and time, you might as well call them "gods".

I agree that the "constants" suggest that there must have been something conscious picking out the right values. I don't think these physical "constants" are actually constant, but that is a side-issue.

If you are willing to entertain the simulation idea, then one possible simulation is that the universe is only 6000 years old and the simulation just makes it look older to delude us.

----------


## desiresjab

> One could say that any delusion is a simulation, perhaps a self-simulation, hiding what is really going on. One doesn't need computers to do this. Nor does it have to get technologically fancy. Lies are simulations of reality with the intent to delude others.
> 
> If the "makers" are outside space and time, you might as well call them "gods".
> 
> I agree that the "constants" suggest that there must have been something conscious picking out the right values. I don't think these physical "constants" are actually constant, but that is a side-issue.
> 
> If you are willing to entertain the simulation idea, then one possible simulation is that the universe is only 6000 years old and the simulation just makes it look older to delude us.


I am glad you picked up on that last idea. It would make the ultra conservative Christians right by accident. Biblical chronology wins by accident! Can such a deceptive god really be benevolent? If the bible is right about chronology, it must be wrong about god. Would a benevolent god lead us on such a goose chase, which is what its "truth" would make of science?

Oh, wait. The Jew god told us knowledge was all bull, didn't he? He told us the only thing worth doing was polishing his nards, didn't he? That's why you can't like this god, or ever agree with him.

Most Logical Summary: we ourselves or the entire universe were made by less than perfect gods. There is no reason for a perfect being to create children he cannot protect fully from the evil one he knows we are no match for on average. If he were perfect he could protect us and he would.

And of course, what forking reason was so important he had to create us in the first place, not only make us but make us spiritually helpless? Because he had to test us? Manufacturers test light bulbs that way--large scale statistics. A manufacturer only needs to do this if defects are possible.

Because he had to give us free will? Beings made right do not need free will. Make them right in the first place, instead of blaming their defects on the consumer. 

He needs us pure before we join him? Make us that way, you are perfect.

It all points in one direction. The maker of this universe was one powerful sombitzen, but not all powerful. The maker of this universe, if any, may indeed have an enemy nearly as powerful as itself. It would not need us for its war, if it were all powerful. If there is a god, I would say he has implemented a military draft. Again, he would not need to do that if he were perfect.

But--Jesus crawlers scream--god needs to whip the devil once and for all, and the devil would not have been able to damage other perfect beings. Of course that is no more than an admission god is using us as bait to draw the evil one out of his hidey-hole so he can be slain. Does that seem sufficiently all powerful to you to be perfect, or vice versa?

The all powerful could have no worthy enemy on any front. Any other stance is blocked by the definition.

Conclusion: The god of all religions is an infinite, powerful liar, or a finite, powerful liar.

----------


## YesNo

> I am glad you picked up on that last idea. It would make the ultra conservative Christians right by accident. Biblical chronology wins by accident! Can such a deceptive god really be benevolent? If the bible is right about chronology, it must be wrong about god. Would a benevolent god lead us on such a goose chase, which is what its "truth" would make of science?


It would make them right by "design", not "accident". Don't forget that by claiming the universe is a computer simulation, you are also saying that science is a goose chase.

I'm a general panentheist, but I'm not a member of any specific religion. I don't think "design" is a good way to describe what a theist's God does to create and maintain the universe. That would be more appropriate to a deist's view of god. The problem with "design" is that it is too mechanistic. It is too deterministic. If anything, considering that the universe had a beginning and considering the discovery of quantum indeterminacy, we need to look at more organic metaphors for the universe. It makes more sense to see the universe as an organism than as a machine some deist's god designed and then wound up and set in motion.




> Oh, wait. The Jew god told us knowledge was all bull, didn't he? He told us the only thing worth doing was polishing his nards, didn't he? That's why you can't like this god, or ever agree with him.


What does it mean to polish someone's nards? I did look the phrase up in the search engine, but nothing came up.

When you say "Jew god", is this a rhetorical appeal to antisemitism? I don't think the dominant Christian culture is as antisemitic as it used to be. If you could get the Muslims involved somehow, that might still have a rhetorical appeal.




> Most Logical Summary: we ourselves or the entire universe were made by less than perfect gods. There is no reason for a perfect being to create children he cannot protect fully from the evil one he knows we are no match for on average. If he were perfect he could protect us and he would.
> 
> And of course, what forking reason was so important he had to create us in the first place, not only make us but make us spiritually helpless? Because he had to test us? Manufacturers test light bulbs that way--large scale statistics. A manufacturer only needs to do this if defects are possible.
> 
> Because he had to give us free will? Beings made right do not need free will. Make them right in the first place, instead of blaming their defects on the consumer. 
> 
> He needs us pure before we join him? Make us that way, you are perfect.


I read this recently in Joseph Campbell and Bill Moyers's "The Power of Myth" (page 3--I didn't get very far): "...the only way you can describe a human being truly is by describing his imperfections. The perfect human being is uninteresting..."

My suggestion is to throw away the mechanistic or manufacturing metaphors and think more in terms of biology and organisms.




> It all points in one direction. The maker of this universe was one powerful sombitzen, but not all powerful. The maker of this universe, if any, may indeed have an enemy nearly as powerful as itself. It would not need us for its war, if it were all powerful. If there is a god, I would say he has implemented a military draft. Again, he would not need to do that if he were perfect.


That's an interesting idea. What's a "sombitzen"? I did try to search for the word.




> But--Jesus crawlers scream--god needs to whip the devil once and for all, and the devil would not have been able to damage other perfect beings. Of course that is no more than an admission god is using us as bait to draw the evil one out of his hidey-hole so he can be slain. Does that seem sufficiently all powerful to you to be perfect, or vice versa?


Another interesting idea for a story.




> The all powerful could have no worthy enemy on any front. Any other stance is blocked by the definition.
> 
> Conclusion: The god of all religions is an infinite, powerful liar, or a finite, powerful liar.


Let's get back to simulations. 

People use simulations to test theories since direct observations are not always possible. But just because one can create a simulation does not mean that reality is a computer simulation. That would be the same as saying that because we can make plastic flowers, flowers must be made of plastic. Furthermore, the consciousness within the universe cannot be generated from an algorithm that a human can follow. That is how I read John Searle's Chinese Room Argument against artificial intelligence.

Ultimately, I think the confusion comes down to thinking the model is reality. The model is just a way to help us understand and exploit some part of what we experience.

----------


## desiresjab

> It would make them right by "design", not "accident". Don't forget that by claiming the universe is a computer simulation, you are also saying that science is a goose chase.
> 
> I'm a general panentheist, but I'm not a member of any specific religion. I don't think "design" is a good way to describe what a theist's God does to create and maintain the universe. That would be more appropriate to a deist's view of god. The problem with "design" is that it is too mechanistic. It is too deterministic. If anything, considering that the universe had a beginning and considering the discovery of quantum indeterminacy, we need to look at more organic metaphors for the universe. It makes more sense to see the universe as an organism than as a machine some deist's god designed and then wound up and set in motion.
> 
> 
> 
> What does it mean to polish someone's nards? I did look the phrase up in the search engine, but nothing came up.
> 
> When you say "Jew god", is this a rhetorical appeal to antisemitism? I don't think the dominant Christian culture is as antisemitic as it used to be. If you could get the Muslims involved somehow, that might still have a rhetorical appeal.
> ...


In the end we have to use words to describe our thoughts. I don't see much difference between a mechanistic universe and an organic universe--that is mere window dressing to me. I guess people do that when they want to imply they think the universe evolves. All that really does is introduce the notion of change into the equation. What is infinite unfolding of emergent properties, if not evolution? Enfoldment means the potential for evolution to me, not the unpacking of particular properties on schedule. I do not consider a piece of metal slowly rusting as an example of evolution, though a change of state does occur. Which way evolution goes depends on which properties are already unfolded. Many lines of evolution did not unfold, which enabled us to be what we are. Those properties had only the quality of abstract potential. They were not "already there."

I hardly believe the universe is as simple as the 17th century watch paradigm. These days we prefer a more organic outlook. Yet there is also the information outlook--the universe is nothing but a big computer. It is interesting that our historical models for the universe always employ the most cutting edge technologies of their eras. Newton invented calculus, which was all about documenting change, yet the model of the universe which emerged out of his era was based on the regularity of a clock, the most sophisticated article of technology of his time, requiring far more moving parts than a telescope.

The heirs to mechanism today are those who postulate the universe is a computer or a hologram. Do we dare to guess what is next?

----------


## desiresjab

How people can hold hardcore beliefs based on so little, mystifies me. Some are in my own family. Perhaps a clue lies in the fact that anti-knowledge brainwashing is found throughout the Bible. How fantastic if increasing knowledge had only been the object of navigation in religions. But we got religions which are traditionally the declared enemies of _earthly knowledge_. In place of investigative knowledge, religion gave us faith and the belief there is a higher state of understanding they call _enlightenment, salvation, wisdom,_ or some other manufactured name for a _state of grace_.

Those who turn their backs on blind faith for physics are on the holier path. Until such time that chanting or praying formulae by groups of scientists proves more effective than programming them into computers, faith mongers don't have a thing I believe I should pursue. They are on an ancient, worn out path that has never accomplished a thing that I can see and which, I believe, has thoroughly demonstrated its worthlessness in solving any problems whatsoever. Its cosmology is primitive and unsuitable for modern minds; its ethical systems are out of date, and kept up to date only by discarding large portions of the original and highlighting those parts that still may be relevant.

When churches and mosques are converted to museums instead of being demolished like the giant Buddhas, we will be on the next track of our journey, unafraid to acknowledge our racial past and open to suggestions from ourselves.

If evidence turns up supporting any of the childish notions of religion, we can always go back. Even better reasons than religion for behaving ourselves are there for the taking.

If a contemporary were transported one hundred years forward, he would find the world distinctly less religious.

Only more reasons for abandoning traditional religions will turn up in the evidence as we proceed.

----------


## YesNo

> How people can hold hardcore beliefs based on so little mystifies me. Some are in my own family. Perhaps a clue lies in the fact that anti-knowledge brainwashing is found throughout the Bible. How fantastic if increasing knowledge had only been the object of navigation in religions. But we got religions which are traditionally the declared enemies of _earthly knowledge_. In place of investigative knowledge, religion gave us faith and the belief that there is a higher state of understanding they call _enlightenment, salvation, wisdom,_ or some other manufactured name for a _state of grace_.


I've noticed that it's always _other people's_ beliefs that are "hardcore" and "brainwashing". 




> Those who turn their backs on blind faith for physics are on the holier path. Until such time that chanting or praying formulae by groups of scientists proves more effective than programming them into computers, faith mongers don't have a thing I believe I should pursue. They are on an ancient, worn out path that has never accomplished a thing that I can see and which, I believe, has thoroughly demonstrated its worthlessness in solving any problems whatsoever. Its cosmology is primitive and unsuitable for modern minds; its ethical systems are out of date, and kept up to date only by discarding large portions of the original and highlighting those parts that still may be relevant.


I think prayer and chanting make the mind healthier. Have you ever seen the movie "Anger Management"?




> When churches and mosques are converted to museums instead of being demolished like giant Buddhas, we will be on the next track of our journey, unafraid to acknowledge our racial past and open to suggestions from ourselves.


What I am hoping to see is atheists give up on anti-civil-libertarian, social construction fantasies involving the end of theism.




> If evidence turns up supporting any of the childish notions of religion, we can always go back. Even better reasons than religion for behaving ourselves are there for the taking.


There is evidence already: (1) The universe had a beginning. (2) Quantum indeterminacy throws out the machine metaphors. (3) Psi phenomena have empirical evidence justifying their existence. (4) We actually _are_ conscious: take a deep breath.




> If a contemporary were transported one hundred years forward, he would find the world distinctly less religious.


I think that has been an atheistic dream for centuries. So far it hasn't happened.




> Only more reasons for abandoning traditional religions will turn up in the evidence as we proceed.


What "evidence" are you referring to? Perhaps the belief in that elusive evidence ever showing up is an example of hardcore brainwashing.

----------


## desiresjab

> I've noticed that it's always _other people's_ beliefs that are "hardcore" and "brainwashing".


Freedom of religion is necessary for that reason. 




> I think prayer and chanting make the mind healthier. Have you ever seen the movie "Anger Management"?


I have not seen Anger Management. I have no doubt there are marginal benefits from the activities you name. That is because the people employing them are investing effort. When humans invest effort they get results, whether it is from praying or Norman Vincent Peale. I suspect that any person diligently embarking on a Peale-inspired program of self-improvement and relaxation would get the same positive results without resorting to the hocus pocus of religion or "eastern enlightenment."

Your normal MO is to now state that Norman Vincent Peale is only a belief system, too. 




> What I am hoping to see is atheists give up on anti-civil-libertarian, social construction fantasies involving the end of theism.


I have no idea what you are talking about with _anti-civil-libertarian_... with atheists. That's all right. Don't try to educate me on that one. Since I am an agnostic, I do not cotton to proselytizing atheists. Should I enjoy proselytizing Christians or Muslims any more?




> There is evidence already: (1) The universe had a beginning. (2) Quantum indeterminacy throws out the machine metaphors. (3) Psi phenomena have empirical evidence justifying their existence. (4) We actually _are_ conscious: take a deep breath.


There is no evidence for religion in any of that, except to those who insist on interpretations fitting their predisposed agendas. I believe there is not a single instance of a psychic phenomenon in the history of mankind that stands up to the scrutiny of logic and scientific inspection. Name it if you have it. If you say the resurrection of Jesus, we must be done here. There is only a theory that the universe had a beginning, my friend. This is evidently not the universe at all, but a tiny instance of it. There is another theory, and there goes god again.

I have nothing against more complex and relevant models of reality employing higher P/NP standards. Even machines can exceed our standards, though. There can be non-polynomial machines, perhaps. Would not a quantum computer itself be non-polynomial in its operations? At least it seems so. We do not know if any questions are truly hard, though. That is exactly what P/NP is trying to determine. One can have intuitive opinions, however. Mine is that some questions are truly hard. 
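The asymmetry behind P versus NP can be made concrete with a small sketch of my own (not from the thread, and deliberately simplified): for a problem like subset sum, *checking* a proposed answer is quick and polynomial, while the only obvious way to *find* one is to try exponentially many subsets.

```python
from itertools import combinations

def verify(nums, certificate, target):
    """Polynomial-time check: does the proposed subset really sum to target?"""
    return sum(certificate) == target and all(x in nums for x in certificate)

def search(nums, target):
    """Exhaustive search: in the worst case tries all 2**len(nums) subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to the target

nums = [3, 34, 4, 12, 5, 2]
solution = search(nums, 9)       # finding an answer may be slow...
assert verify(nums, solution, 9) # ...but checking it is fast
```

Whether the search step can ever be done in polynomial time for every such problem is exactly the open P/NP question the post alludes to.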




> I think that has been an atheistic dream for centuries. So far it hasn't happened.


This format is not allowing me to look back to see what you are replying to. I am quite unhappy with certain mechanics of this forum, such as its insistence on multiple log-ins just to make a post.

At any rate, religion is slowly disappearing. Donations to the collection plates have been falling steadily for more than a century. More significantly, the influence of the church in people's daily lives has declined almost out of existence. Almost no one allows church doctrine to decide their actions on any issue today, except as a last-resort fallback or political position. A few hundred years ago the church ruled the daily lives of people in every detail. And no fair poll of our antecedents would have shown a mere 63% of them to be believers.

Of our 63% of professed believers, almost none of them do anything about it, or act in accordance with church doctrine when the going gets opportunistic. 99.9%+ of the 63% are lip servers only. Believe me, this is way down from medieval times. Please do not ask me how I know Russian peasants and fishwives were more sincere in their religious beliefs than Wall Street executives or rapping shoe clerks who might respond as believers.

Our own country started out Puritan, but it is far from those roots now. It has not broken them yet, because they are useful tools for political unity in the minds of politicians.




> What "evidence" are you referring to? Perhaps the belief in that elusive evidence ever showing up is an example of hardcore brainwashing.


How do I know, please, what future evidence might look like? No one has ever proved very good at predicting the future.

----------


## YesNo

> Your normal MO is to now state that Norman Vincent Peale is only a belief system, too.


I have no problem with belief systems.




> I have no idea what you are talking about with _anti-civil-libertarian_... with atheists. That's all right. Don't try to educate me on that one. Since I am an agnostic, I do not cotton to proselytizing atheists. Should I enjoy proselytizing Christians or Muslims any more?


I am referring to things like the Khmer Rouge, Maoism and Nazism.




> I believe there is not a single instance of a psychic phenomenon in the history of mankind that stands up to the scrutiny of logic and scientific inspection. Name it if you have it.


Check out Dean Radin's "Supernormal" or "Entangled Minds".




> If you say the resurrection of Jesus, we must be done here.


What strikes me about those resurrection accounts are the similarities to modern shared death experiences.




> There is only a theory that the universe had a beginning, my friend. This is evidently not the universe at all, but a tiny instance of it. There is another theory, and there goes god again.


The only way atheists can remove God is if they can explain away _our consciousness_ by reducing it to something non-conscious. That is why I said to take a deep breath. The evidence of your own awareness contradicts your atheism. 




> I have nothing against more complex and relevant models of reality employing higher P/NP standards. Even machines can exceed our standards, though. There can be non-polynomial machines, perhaps. Would not a quantum computer itself be non-polynomial in its operations? At least it seems so. We do not know if any questions are truly hard, though. That is exactly what P/NP is trying to determine. One can have intuitive opinions, however. Mine is that some questions are truly hard.


These concepts relate only to the speed at which algorithms can find a solution. However, John Searle's critique doesn't depend on the speed of computation nor whether you use a quantum computer. Consciousness cannot be generated by these algorithms.




> At any rate, religion is slowly disappearing. Donations to the collection plates have been falling steadily for more than a century. More significantly, the influence of the church in people's daily lives has declined almost out of existence. Almost no one allows church doctrine to decide their actions on any issue today, except as a last-resort fallback or political position. A few hundred years ago the church ruled the daily lives of people in every detail. And no fair poll of our antecedents would have shown a mere 63% of them to be believers.
> 
> Of our 63% of professed believers, almost none of them do anything about it, or act in accordance with church doctrine when the going gets opportunistic. 99.9%+ of the 63% are lip servers only. Believe me, this is way down from medieval times. Please do not ask me how I know Russian peasants and fishwives were more sincere in their religious beliefs than Wall Street executives or rapping shoe clerks who might respond as believers.
> 
> Our own country started out Puritan, but it is far from those roots now. It has not broken them yet, because they are useful tools for political unity in the minds of politicians.


Particular religions come and go, but the motivation for religion is rooted in our biology, not our culture. That is why atheistic (or "agnostic") efforts to legislate against, educate away or purge religious groups will ultimately fail. See Justin Barrett's "Born Believers: the science of children's religious belief" for a summary of the data supporting that position.




> How do I know, please, what future evidence might look like? No one has ever proved very good at predicting the future.


I am looking for current evidence from science, not science fiction, not wishful thinking, that you rely on to support your views.

----------


## desiresjab

> I have no problem with belief systems.
> 
> 
> 
> I am referring to things like the Khmer Rouge, Maoism and Naziism.
> 
> 
> 
> Check out Dean Radin's "Supernormal" or "Entangled Minds".
> ...


You are the one who asked me about future evidence, my boy, like I can see into the distant future. I cannot name possible future evidence, so that makes you right on a whole host of things apparently.

You keep making statements, such as that our own consciousness proves the existence of Gog. Tsk, tsk...I don't think so. Yes, in your mind it does exactly that, but not in anyone else's. A dog's consciousness must also prove the existence of Gog.

To me in these deep matters everything is an open question, but you have settled on a belief in Jesus which you now must find justification for in every facet of any new discovery. Everything must be reinterpreted in terms of Jesus being real and mystical. What you are getting at are your own preconceptions, but you interpret them to support your own dearly held beliefs about Jesus, not even seeing that they come from yourself and not a body of reliable evidence.

There is no good reason yet for me to state positively that the universe even had a beginning or did not, that there is a gog or there is not, or that Jesus is my saviour. Instead of multiverse, how about one big universe which we are either all of or not all of? No one can state for sure that what we call the universe is anything more than a local anomaly in a bigger picture.

If you want to get touchy about Jesus, that is up to you. But expect me to call it as I see it, friend. You are all touchy about Jesus, but the powers of Jesus are merely a theory, too. Truthfully, Jesus has not proven himself as a theory. But heck, maybe he will be back yet. I mean, the aether came back, in a way, with dark energy pervading all of space instead, so there is still hope for Jesus to come back under another interpretation, I guess.

Jesus, Jehovah, Allah, Brahma--these are concepts to let go of as they slip into the stream of history, not to embrace and cherish. You can believe this: Jesus and the rest of them are on their way out for good. They will not be any more important to a space-faring race than Zeus is to us now. This is an easy extrapolation. That is the way it is in every science fiction novel, and those novels are the real prophecies of our civilization.

Biblical prophecies did a hellaciously poor job. Other than the return of the Jews to Israel--a biased, self-fulfilled phenomenon brought about by the powers of the west--they got nothing right. There is no Icarus in the bible presaging human flight. What a dead book for knowledge and understanding the bible is! The book of the dead. The bible gives a value for pi of 3. What an insight for an all-powerful gog! God only missed it by a little. Believers claim the bible is the very word of god. Why didn't god say 3 1/7, then, since it is closer? Apparently God had not learned his fractions yet.

I have no problem with anyone believing in a very deep, vague God which is the underlying order of the universe as we know it, something like that. When they start getting specific with Jesus or Mohammad and little-kid mythologies, then I consider them basket cases. _Childhood's End_ by Arthur C. Clarke has it all in the title. Traditional religions and gods are not worthy of us. We are now as good as or better than the gods of our so-called holy books. We have been told we are not worthy of god, but it is these gods who are not worthy of us.

You really want to be one with a tyrant like Jehovah, eh? Not me. My hope is for an afterlife with no gods at all.

----------


## desiresjab

I believe the influence of religion on mankind has been slightly positive or neutral, when summed up. Only a civilization devoid of religion, grounded instead in pure reasoning and humanitarian ideals, could possibly have done better.

Religion has been a weight for a while now. Religion survives almost exclusively on its political usefulness. I feel we do not need religion anymore. It is holding us back by infecting too many people with nonsensical beliefs. All we need are the best humanitarian principles our race has come up with. Some of those are found in religions, and many are not.

This cannot be legislated effectively. You cannot force people not to believe something. We simply have to outgrow the delusions of our caveman infancy. The danger is in replacing religious authority with that of a police state to dictate right and wrong. Our best humanitarian ideals must always be the guide of our institutions. That is a very difficult state to maintain.

When our current religions are as believable to our descendants as Zeus is to us, I think we will have advanced sufficiently to show that we can be survivors without religion.

----------


## desiresjab

This is fascinating. Back to basics.

http://www.bookpump.com/bwp/pdf-b/9424134b.pdf

----------


## YesNo

I am also interested in cosmological models that do not rely on a big bang the way it is normally presented, as Crawford's does in the link you provided. 

These models sometimes call into question dark matter, dark energy and the constancy of physical "constants" such as the speed of light and gravitation. 

Here is another paper by Wun-Yi Shu http://arxiv.org/vc/arxiv/papers/1007/1007.1750v1.pdf. I have only read the introduction.

----------


## desiresjab

> I am also interested in cosmological models that do not rely on a big bang the way it is normally presented, as Crawford's does in the link you provided. 
> 
> These models sometimes call into question dark matter, dark energy and the constancy of physical "constants" such as the speed of light and gravitation. 
> 
> Here is another paper by Wun-Yi Shu http://arxiv.org/vc/arxiv/papers/1007/1007.1750v1.pdf. I have only read the introduction.


I lost another long post because of the idiotic setup of this forum. I am about done with this goat hole. It does not matter whether I log in first or not; it always tells me I do not have permission to post when I try to send my post, and I have to go through some other crap. Sometimes I have lost the post in the process. The people who run this outfit need to explain themselves.

Anyway, that was a great link. Right now I do not feel like trying to recreate my detailed post, so I will let it go for now.

----------


## Dreamwoven

desiresjab, you should take this up with someone. You should not need to log in every time: I never "log in" here. Your posts are clearly expressed and valuable. And in an interesting way this thread touches on a similar discussion in the astronomy thread.

----------


## desiresjab

> desiresjab, you should take this up with someone. You should not need to log in every time: I never "log in" here. Your posts are clearly expressed and valuable. And in an interesting way this thread touches on a similar discussion in the astronomy thread.


It doesn't seem like I used to have this irritating and sometimes destructive problem. Putting a lot of thought, time and effort into a post only to have it lost because of a consistent complication is unnerving. It could have something to do with how long the post takes me to write. This one is fast. I will try it and see if it goes through without the rigmarole.

----------


## desiresjab

> It doesn't seem like I used to have this irritating and sometimes destructive problem. Putting a lot of thought, time and effort into a post only to have it lost because of a consistent complication is unnerving. It could have something to do with how long the post takes me to write. This one is fast. I will try it and see if it goes through without the rigmarole.


That went through without complications. Something is timing me out, apparently, making it necessary for me to login again too quickly.

----------


## desiresjab

I can read some higher mathematics decently, kind of like some people might be able to read a Portuguese newspaper but are unable to speak or understand it fluently at ground level where it is spoken. The gentleman in the link takes great pains to show that under his model we live in a spherical universe in 3-space with both radiation and dust.

Since I am not intimately familiar with the notation of this exact subject, it is like skipping over a word you don't know in the Portuguese newspaper and filling it in from context. You never know what a bracket instead of regular parentheses might mean, for it has different meanings in different areas of mathematics, and of course what is in this paper is not pure math but mathematical physics, which has some notation conventions of its own.

The universe is expanding in Shu's model, but it had no beginning and has no end, and the rate of expansion accelerates and decelerates by an unknown mechanism, vaguely described (to me) as curvature pressure and radiation pressure.

Time is not constant in this model but a conversion factor which varies with the evolution of the universe, so other things like Planck's constant are not constants either. It is dazzling and I can only grasp some elements of it. 

It appears even redshift is explained away as some kind of curvature pressure on photons. I am not sure if this means we are not really expanding but only appear to be doing so to ourselves, or if it actually matters to the model.

Powerful stuff, though. They are using the tools.

----------


## YesNo

I only vaguely understand these papers. What they provide are prompts for ideas and if the ideas lead back to the papers a closer reading is warranted.

What I find interesting is Shu's willingness to consider that the speed of light and Big G are not constants. What does that mean? It means that there has not been adequate empirical evidence collected to establish that these two "constants" should be considered constant. 

Shu's paper requires these to change. We now need empirical evidence to show that they actually do change. I think that data may be easy to come by. Rupert Sheldrake has been arguing that these are not constant for some time, especially Big G. Then we can ask if they are changing the way Shu predicts they should change. 

However, I don't think replacing c and big G with functions of time is adequate. They are already constant functions of time and their values are determined in an ad hoc data fitting manner.

----------


## desiresjab

> I only vaguely understand these papers. What they provide are prompts for ideas and if the ideas lead back to the papers a closer reading is warranted.
> 
> What I find interesting is Shu's willingness to consider that the speed of light and Big G are not constants. What does that mean? It means that there has not been adequate empirical evidence collected to establish that these two "constants" should be considered constant. 
> 
> Shu's paper requires these to change. We now need empirical evidence to show that they actually do change. I think that data may be easy to come by. Rupert Sheldrake has been arguing that these are not constant for some time, especially Big G. Then we can ask if they are changing the way Shu predicts they should change. 
> 
> However, I don't think replacing c and big G with functions of time is adequate. They are already constant functions of time and their values are determined in an ad hoc data fitting manner.


The only opinion I have is that guys like Shu live and work in a rarefied atmosphere. Let's assume he is right. How many people on earth would be able to see his work and judge it fairly in minute detail? A mere handful. That is why these changes take so long--it still takes the few human beings who are capable to peer-review the work. Then the discovery has to be brought into play. This can take decades.

This awful lag will be no more once cyborgs come on the scene. They will be able to evaluate discoveries with their immense computing power, and make human-like judgements on their merits. Next, they will help get these discoveries into play in days rather than years. Humans are great, but it takes them too long to evaluate their own work and put their discoveries into play. Like many other aspects of human life, this lag is going to shorten to almost nothing in the near future.

Everything depends on us not wiping ourselves out, though. The whole structure is standing, with the greatest researchers at the top. The structure is a house of cards. It has been brought down partially many times, and mankind had to start building its research structure again. We have had it up for six hundred years. When it comes down, the highest research stops for the most part.

----------


## YesNo

Probably the only thing a cyborg could do is validate a mathematical proof. 

I think you are right that it takes a few decades for knowledge to reach people like us unless it is our specialty field, but it has to reach our minds, not just our computers (or cyborgs).

One way to improve the flow of knowledge is to facilitate our networks, which the internet helps to do. There is a book by Nicholas A. Christakis called "Connected: the surprising power of our social networks and how they shape our lives". He talks of three degrees of influence. To use this thread as an example, what we write influences each of us as well as anyone who happens to read it. That's level one. Each of us is connected to others. They are influenced even though they haven't read the thread. That's level two. Each of them is connected to others, and those others are influenced even though they have never heard of Lit Net. That's level three. The influence gets weaker as one progresses to each level. The only way I can see cyborgs helping is if they improve the social networks in our lives.
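The three levels described above can be sketched as a breadth-first walk over a contact graph. This toy version is my own illustration (the names and the network are made up): it stops expanding at the third hop, mirroring the three-degree horizon.

```python
from collections import deque

def influence_levels(graph, source, max_level=3):
    """Breadth-first walk from the source, returning {person: hops}.
    Contacts beyond max_level hops are never reached."""
    levels = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        if levels[person] == max_level:
            continue  # third-degree contacts are not expanded further
        for contact in graph.get(person, []):
            if contact not in levels:
                levels[contact] = levels[person] + 1
                queue.append(contact)
    return levels

# Hypothetical network: the thread, its readers, and their contacts.
graph = {
    "thread": ["ann", "bob"],  # level 1: read the thread directly
    "ann": ["carl"],           # level 2: influenced via a reader
    "carl": ["erin"],          # level 3: never heard of the thread
    "erin": ["frank"],         # level 4: beyond the horizon
}
levels = influence_levels(graph, "thread")
```

Here "frank" sits four hops out and never enters the result, which is the book's point about influence fading past three degrees.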

----------


## desiresjab

> Probably the only thing a cyborg could do is validate a mathematical proof. 
> 
> I think you are right that it takes a few decades for knowledge to reach people like us unless it is our specialty field, but it has to reach our minds, not just our computers (or cyborgs).
> 
> One way to improve the flow of knowledge is to facilitate our networks, which the internet helps to do. There is a book by Nicholas A. Christakis called "Connected: the surprising power of our social networks and how they shape our lives". He talks of three degrees of influence. To use this thread as an example, what we write influences each of us as well as anyone who happens to read it. That's level one. Each of us is connected to others. They are influenced even though they haven't read the thread. That's level two. Each of them is connected to others, and those others are influenced even though they have never heard of Lit Net. That's level three. The influence gets weaker as one progresses to each level. The only way I can see cyborgs helping is if they improve the social networks in our lives.


I believe your first sentence is wrong, and I believe your second sentence is looking in the wrong direction. 

Here's what I mean. The cyborg with quantum computing abilities could not only validate proofs, but quickly use them to search for further proofs. This is only the tip of the iceberg. With a top-down approach, it could almost instantly relate any new results in math to every field in science where they applied. No one but other cyborgs would have any chance at all of keeping up with the progress. At such a pace, the human mind by itself would lag exponentially further behind.

Your second sentence looks in the wrong direction because the knowledge will never reach minions like you and me, only the results of it will, if we manage to stay in control of our cyborgs.

The ramifications are scary, because I do not see how it is possible for plain old human beings to stay in control for very long of machines that can out think them by an exponentially increasing margin.

For comfort, one could always imagine these powerful entities will forever remain subservient to their complacent human masters who are pale and fat and so not-very-smart. We can imagine they also will write the magazine articles which keep us up to date on recent progress in science and mathematics, we who will be no more than observers in this process. We had better hope our cyborgs desire fat, useless witnesses to their activities.

----------


## YesNo

I agree that they could do more than validate proofs. Our computers currently provide us with similar aids. We do have to tell them what to do and when to start. However, I don't think they would have any more likelihood of taking over than the computers that provide climate control for our buildings would. 

So, I'm not scared by any of this. Why? Because they are not conscious and so cannot make a choice to dominate us or not. How do I know they are not conscious? Because they follow algorithms and so their unconsciousness follows from Searle's Chinese Room argument.

----------


## desiresjab

> I agree that they could do more than validate proofs. Our computers currently provide us with similar aids. We do have to tell them what to do and when to start. However, I don't think they would have any more likelihood of taking over than the computers that provide climate control for our buildings would. 
> 
> So, I'm not scared by any of this. Why? Because they are not conscious and so cannot make a choice to dominate us or not. How do I know they are not conscious? Because they follow algorithms and so their unconsciousness follows from Searle's Chinese Room argument.


True, they are not conscious, but we can expect something in the not-too-distant future that acts like consciousness and can pass the Turing test. Once we lose our ability to distinguish their simulated consciousness from the real thing, then for all practical purposes they are conscious, or at least we cannot prove otherwise.

Entities of mixed meat and silicon, with enhanced sensory and computing capabilities, will make humans obsolete fast on the battlefield. Similarly, they will make us obsolete everywhere in normal life, useless in war, research and technology, good for only politics and religion.

Do not overlook the key--that the new beings of meat and metal may not have true consciousness, but will have such sophisticated software that telling the difference will become impossible. Even their software is written by other cyborg specialists. They act just like humans and seem to have the same motivations; they can get inspired, apparently. They do the same things we do--breed, eat, think--they are better than us. Why would they keep us around, and how would we actually control entities like this without becoming one ourselves?

The global elite have set it up so that our future will be as a chemical society of pill-takers for every ailment, imagined and real. Right out of Orwell. How these interests will conflict or meld with the machine interests is yet to be played out. But I imagine the cyborgs will need plenty of chemical assistance to keep their chimerical systems functioning properly. They will be writing their own software and developing their own medicine. What is hard to grasp is how fast the human world can transform. Once the exponentiation sets in, they will develop so fast we will have no control. They will be indispensable in every walk of life, and humans will be quite dispensable, in fact useless.

----------


## YesNo

> True, they are not conscious, but we can expect something in the not-too-distant future that acts like consciousness and can pass the Turing test. Once we lose our ability to distinguish their simulated consciousness from the real thing, then for all practical purposes they are conscious, or at least we cannot prove otherwise.


I thought there already existed computers that passed the Turing test. https://www.washingtonpost.com/news/...andmark-trial/

If they are not actually conscious they cannot make a choice to dominate us or not. However, humans, who are conscious, can make choices to use them in ways that may not be ethical.




> Entities of mixed meat and silicon, with enhanced sensory and computing capabilities, will make humans obsolete fast on the battlefield. Similarly, they will make us obsolete everywhere in normal life, useless in war, research and technology, good for only politics and religion.
> 
> Do not overlook the key--that the new beings of meat and metal may not have true consciousness, but will have such sophisticated software that telling the difference will become impossible. Even their software is written by other cyborg specialists. They act just like humans and seem to have the same motivations; they can get inspired, apparently. They do the same things we do--breed, eat, think--they are better than us. Why would they keep us around, and how would we actually control entities like this without becoming one ourselves?
> 
> The global elite have set things up so that our future will be as a chemical society of pill takers for every ailment, imagined and real. Right out of Orwell. How these interests will conflict or meld with the machine interests is yet to be played out. But I imagine the cyborgs will need plenty of chemical assistance to keep their chimeric systems functioning properly. They will be writing their own software and developing their own medicine. What is hard to grasp is how fast the human world can transform. Once the exponentiation sets in, they will develop so fast we will have no control. They will be indispensable in every walk of life, and humans will be quite dispensable, in fact useless.


Perhaps those of us who have smart phones today are already cyborgs.

----------


## desiresjab

> I thought there already existed computers that passed the Turing test. https://www.washingtonpost.com/news/...andmark-trial/


There you go. We can already not keep up with it. This was about a year ago.

Fooled into friendship with a computer would be something like being fooled into sex by a transvestite. The government could end up creating "friends" for us all, who counsel us and slyly steer us toward good citizenship and productivity.

----------


## YesNo

Being fooled by one's government is nothing new.

----------


## RogersSaaed

I can read some higher mathematics decently, kind of like some people might be able to read a Portuguese newspaper but are unable to speak or understand it fluently at ground level where it is spoken. The gentleman in the link takes great pains to show that under his model we live in a spherical universe in 3-space with both radiation and dust.

----------


## RogersSaaed

I am just trying on ideas here.

----------


## YesNo

> I can read some higher mathematics decently, kind of like some people might be able to read a Portuguese newspaper but are unable to speak or understand it fluently at ground level where it is spoken. The gentleman in the link takes great pains to show that under his model we live in a spherical universe in 3-space with both radiation and dust.


I agree with that. Two questions come to my mind whenever a cosmology is being presented: (1) What evidence exists for it? (2) What metaphysical assumptions are at stake?

For example, Hawking, if I understood David Berlinski's "The Devil's Delusion" correctly (pages 100 and following), presented a view of the Big Bang in which the universe does not have a beginning but has linear time circling around on itself in some complex number space. Is there empirical evidence for it? No, because it is conveniently hidden behind the cosmic microwave background. Why would he present something like this? Well, he has to avoid the universe having a beginning; otherwise he needs a cause for that beginning, and that leads to theism of some sort.

A metaphysical materialist has very little to work with. Everything, including consciousness, has to come from unconscious matter. This is bad enough in a universe that can be assumed to be eternal, but when the universe is shown to be expanding, implying a beginning in the past, the challenge to materialism is severe.

I am assuming both Shu and Crawford have similar materialist assumptions they are trying to protect, but I may be wrong since I haven't studied them enough.

My main problem with the artificial intelligence position that desiresjab promotes is the attempt to reduce consciousness to unconscious algorithms. The Turing test depends on fooling people with the assumption that none of us are really conscious to begin with and we are all fooling each other. So the AI might as well be considered conscious since consciousness does not exist anyway. 

I think people like John Searle have shown that there is a difference between what we experience as a human being and what the algorithms might allow a machine to experience. This means the Turing test is a waste of time.

----------


## desiresjab

> My main problem with the artificial intelligence position that desiresjab promotes is the attempt to reduce consciousness to unconscious algorithms. The Turing test depends on fooling people with the assumption that none of us are really conscious to begin with and we are all fooling each other. 
> 
> This means the Turing test is a waste of time.


Nowhere has anyone said that. There are no parameters to the Turing test which assume none of us is really conscious. Turing never mentioned it. I mentioned it. How does that suddenly mean the Turing test depends on none of us being conscious to begin with? It doesn't. Why are you tying the Turing test to a wild idea of mine?

What I have said is that the simulations for consciousness will become so good we will no longer be able to tell the difference by naked judgement. More extensive tests would have to be devised at some point. I am not sure how this came to mean in your mind anything about the quality of consciousness. My arguments are not concerned with the "actual" quality of future computers' consciousness, only that they possess a semblance of it good enough to perform certain threshold actions.

Here is what you are overlooking in my ideas anyway. The cyborgs do not have to become conscious, man, they start out that way, as meat with a little metal added. There will be no question but that they are conscious, as they begin to outrun us in every human domain at an exponential pace.

----------


## desiresjab

Yes/No, I speculate that you are clutching certain ideas I have partially let go of. One of these is that the pooping organism known as human has a special relationship with God. I think you want to believe very badly that this relationship will carry us through to an afterlife. Who doesn't want that? I have to think that is why you are so intent on believing everything is full of consciousness, for instance. You also have a need to protect human consciousness as something unapproachable from below. What would it mean if you were wrong--if consciousness is approachable from below and our poop chutes have no special relationship with God?

Every bit of it with me is speculation, but with you some of it is belief. I cannot knock anyone for that. Maybe I even like it. Sometimes I may sound like I believe one way or another, but that is not really so. Leaning is possible when you are in the middle.

Of the four or five things I would call beliefs, most of them have been dug out and admitted right in this thread. 

1 There are more things under heaven and earth than your philosophy has ever dreamed of.

2 A universe where two does not "follow" one is not possible.

3 Both racial and personal memory is woefully sparse for a race of history buffs.

4 Science has more chance of saving us than religion.

5 There is not enough evidence to be a believer or a disbeliever.

----------


## YesNo

> Nowhere has anyone said that. There are no parameters to the Turing test which assume none of us is really conscious. Turing never mentioned it. I mentioned it. How does that suddenly mean the Turing test depends on none of us being conscious to begin with? It doesn't. Why are you tying the Turing test to a wild idea of mine?


It is not your idea that I am thinking of, but why the Turing test should be considered of much importance. This test is passed if a certain percentage of those interacting with the machine are fooled and think the machine is human. Why should that imply that the machine is conscious, unless consciousness is equated with fooling others that something is conscious?
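As an aside, the pass criterion being debated here can be made concrete with a toy sketch. The verdict data and the 30% threshold (a figure often associated with Turing's original 1950 prediction) are illustrative assumptions, not anything established in this thread:

```python
# Toy scoring of an imitation-game trial: the machine "passes" if it
# fools at least a given fraction of judges into judging it human.
# The verdicts below are invented illustration data.

def passes_turing_test(verdicts, threshold=0.30):
    """verdicts: list of booleans, True = judge believed the machine was human."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

verdicts = [True, False, True, False, False, False, False, True, False, False]
print(passes_turing_test(verdicts))  # 3/10 judges fooled -> True at a 30% threshold
```

Note that nothing in this score says anything about consciousness; it only measures how often judges are fooled, which is precisely the objection being raised.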

Again, I refer you back to Searle's Chinese room critique of the Turing test. Our consciousness is different from that obtained by following an algorithm, which is all these machines are able to do. Based on Searle's critique, Turing tests are obsolete with regard to issues of consciousness. They are at most scores robot manufacturers can use to brag about the quality of their products.




> What I have said is that the simulations for consciousness will become so good we will no longer be able to tell the difference by naked judgement. More extensive tests would have to be devised at some point. I am not sure how this came to mean in your mind anything about the quality of consciousness. My arguments are not concerned with the "actual" quality of future computers' consciousness, only that they possess a semblance of it good enough to perform certain threshold actions.
> 
> Here is what you are overlooking in my ideas anyway. The cyborgs do not have to become conscious, man, they start out that way, as meat with a little metal added. There will be no question but that they are conscious, as they begin to outrun us in every human domain at an exponential pace.


If the cyborg's "meat" that allows him or her to remain alive contains human DNA then the cyborg is a member of the human species and is one of us. It is like someone who has his or her smart phone more physically attached to the body.

----------


## YesNo

> You also have a need to protect human consciousness as something unapproachable from below.


It is more a difference between metaphysical idealism and metaphysical materialism. You seem to have a need to destroy human consciousness by reducing it to an algorithm that an unconscious machine follows. 

I, on the other hand, have no problem with subatomic particles being conscious in their own way. After all, when asked about their positions or momenta they seem to make a choice within the constraints of their dispositions to respond. All we can expect to know, ever, is the probability distribution of their choices (wave function). 

That might be our difference in perspective. You see a machine following an algorithm as conscious, but you likely don't see the atoms making up the machine as conscious. I see the subatomic particles making up the machine as conscious in their own way, but I don't think the machine itself is conscious.
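The claim that all we can ever know is the probability distribution of a particle's outcomes is the Born rule of quantum mechanics: outcome probabilities are the squared magnitudes of the normalized complex amplitudes. A minimal sketch, with made-up amplitudes:

```python
# Born rule: the probability of each measurement outcome is the squared
# magnitude of its complex amplitude, normalized so the probabilities
# sum to 1. The amplitudes below are purely illustrative.

def born_probabilities(amplitudes):
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# A two-outcome state, e.g. "spin up" vs "spin down":
probs = born_probabilities([1 + 0j, 1 + 1j])
print(probs)  # approximately [0.333, 0.667]
```

The distribution is all the formalism delivers; which outcome occurs on any single measurement is exactly the "choice" being discussed.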

----------


## desiresjab

I will try to respond to both posts in one.

If artificial intelligence became good enough to fool humans consistently, it would then be up to other "slave" machines to determine for us if certain individuals were real or artificial. Once a machine could fool other machines, what then? There would be no easy way of determining if it was conscious or not. They would not conceal themselves behind screens; they would walk among us openly. The only way to know for sure would then be invasive surgery.

You are right, these meat machines will be human. They may even be able to mate with humans. Some of their "metallic" qualities may even be passed on. "Mixed" individuals might turn out to be superior to either article.

Or the meat machines might inaugurate a new species unable to mix with natural humans but able to breed among themselves.

In the first case the human species gradually becomes meat machines because of the advantages. In the second scenario we might have the type of problem a million sci-fi movies have already depicted for us.

For a moment lay aside the question of whether these machine intelligences have real consciousness. If they could stand face to face and talk to a human undetected, they would become confidants at the least.

Even if they are nothing more than highly sophisticated self-programmable machines, that moment in their program history could easily come where the existence of humans seems to them to interfere with their prime directive. This is another idea out of pulp science fiction.

I think these beings are coming, and in the not too distant future. We will struggle with all the definitions and ethics when they get here. I imagine the debates will continue for some time among our descendants as to how they should classify their machines, much as we have done here.

But realistically, do you want one of these guys in the foxhole with you, or would you prefer a normal human? Your chance of survival will be better with the borg, and he commiserates, too. The same goes for any dangerous activity from mountain climbing to piloting airplanes. None of this requires true consciousness, but only the ability to fit in.

We have no doubt that our machines today are not conscious. Our descendants will likewise have no doubt their cyborgs _are_ conscious. It is merely a matter of getting used to them, depending on them, taking their sympathy. Definitions of consciousness will continue to change and adapt until machine consciousness is fit somewhere into the scheme we are comfortable with.

A scary moment to consider is when these machines might start debating how to reclassify us.

----------


## YesNo

> You are right, these meat machines will be human. They may even be able to mate with humans. Some of their "metallic" qualities may even be passed on. "Mixed" individuals might turn out to be superior to either article.


If their metallic qualities are passed on then that would violate neo-Darwinism. I don't think neo-Darwinism is correct, but I suspect you might. That is why I am bringing it up.

If they can mate with humans, then they are human. I am going by a definition of "species" that I think Niles Eldredge would support based on his theory of punctuated equilibria which is a theory of evolution that makes sense.




> In the first case the human species gradually becomes meat machines because of the advantages. In the second scenario we might have the type of problem a million sci-fi movies have already depicted for us.


What advantages are there with having technology physically attached to your body? It seems that would make upgrades difficult. I certainly wouldn't want my smart phone embedded inside my body.




> For a moment lay aside the question of whether these machine intelligences have real consciousness. If they could stand face to face and talk to a human undetected, they would become confidants at the least.


If you are referring to the human "cyborgs", then they would have consciousness. If you are referring to machines driven by algorithms, then they do not.




> Even if they are nothing more than highly sophisticated self-programmable machines, that moment in their program history could easily come where the existence of humans seems to them to interfere with their prime directive. This is another idea out of pulp science fiction.
> 
> I think these beings are coming, and in the not too distant future. We will struggle with all the definitions and ethics when they get here. I imagine the debates will continue for some time among our descendants as to how they should classify their machines, much as we have done here.
> 
> But realistically, do you want one of these guys in the foxhole with you, or would you prefer a normal human? Your chance of survival will be better with the borg, and he commiserates, too. The same goes for any dangerous activity from mountain climbing to piloting airplanes. None of this requires true consciousness, but only the ability to fit in.
> 
> We have no doubt that our machines today are not conscious. Our descendants will likewise have no doubt their cyborgs _are_ conscious. It is merely a matter of getting used to them, depending on them, taking their sympathy. Definitions of consciousness will continue to change and adapt until machine consciousness is fit somewhere into the scheme we are comfortable with.
> 
> A scary moment to consider is when these machines might start debating how to reclassify us.


From the definition of "cyborg" that I am picking up from this discussion, we don't need to wait for our descendants to pass judgment: they would be conscious because they are humans with technology physically attached to them.

I am curious what you think about quantum particles. Are they conscious or not in your metaphysics? If not, how do you make sense out of the choices they make when asked their positions or momenta? 

Also, I would be curious to know what _scientific_ or _philosophical_ references you have to back up your metaphysics. I don't mean science fiction, speculations, belief systems or other forms of modern mythology, but real science or philosophy, the kind that can be cited and then examined critically. 

I think we need some external reference to ground and further the discussion. For my part I have offered Searle as an antidote to belief in the value of Turing tests. Eldredge comes to mind for evolution. There are various surveys of quantum physics that might help. I have referenced Dean Radin for psi phenomena and I could add Raymond Moody for accounts of near and shared death experiences. You did offer an article by Crawford. How does that relate to your ideas? Why is he of interest to you?

----------


## desiresjab

> If their metallic qualities are passed on then that would violate neo-Darwinism. I don't think neo-Darwinism is correct, but I suspect you might. That is why I am bringing it up.
> 
> If they can mate with humans, then they are human. I am going by a definition of "species" that I think Niles Eldredge would support based on his theory of punctuated equilibria which is a theory of evolution that makes sense.
> 
> 
> 
> What advantages are there with having technology physically attached to your body? It seems that would make upgrades difficult. I certainly wouldn't want my smart phone embedded inside my body.
> 
> 
> ...


You keep making authoritative statements I cannot accept. 

"If you are referring to the human "cyborgs", then they would have consciousness. If you are referring to machines driven by algorithms, then they do not."

Just like that. It is impressive you know that our own consciousness is not algorithm-driven. How did you find that out?

----------


## desiresjab

I suppose I would want to know exactly what you think my metaphysics is. I stated all my beliefs a few posts ago. I do not have an opinion on whether electrons are conscious. Remember, it is you who need this opinion. Why are you willing to grant consciousness to an electron, but ready for a death match when it comes to merely considering it for a futuristic machine which only operates on algorithms? Electrons apparently operate from a more lofty paradigm than mere machine algorithms.

You need to insist there is some kind of magical, mystical threshold to consciousness, unapproachable, uncreatable by all but God. I do not share this obsession. To you consciousness seems now vested with what is holy about the universe. You have that disguised need for what is holy. I think we all have that need. I try not to let it interfere with my process.

I know what it does: it clears the way for saying consciousness had no beginning. That consciousness is at the heart of operations in the universe and always has been. That is another supposition I am not pressed to make.

I do not say electrons do not have consciousness, either. But on our thin knowledge of quantum operators I cannot jump out there simply because I would like for something to be true. I only take a stance when there is no other logical choice. Do I have a preference all of the time? Yeah. But not a stance. My stance is that there isn't very good evidence for a firm stance.

----------


## desiresjab

I can't wait for replies.

Consider a mosquito. What is its consciousness? I must add another belief to my stated arsenal and say I do not believe it is self-conscious. The mosquito is not aware that it is conscious. It does not catch itself thinking. How do I know? I know where a bear makes duty, too.

Whether the mosquito is really conscious at all would probably be a reasonable question. If reacting to temperature and hunger pangs and a host of pre-programmed instincts is all that is required, I feel superbly confident those criteria can already be met by our machines. Such are the "decisions" of a mosquito.
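A creature driven only by temperature, hunger, and escape reflexes really is trivial to model. A toy stimulus-response sketch, with all rules and thresholds invented purely for illustration:

```python
# A mosquito as a fixed stimulus-response table: no memory, no
# reflection, just hard-wired rules mapping sensor readings to actions.
# Every rule and threshold here is invented for illustration.

def mosquito_step(temperature_c, hunger, hand_nearby):
    if hand_nearby:
        return "evade"            # hard-wired escape reflex
    if hunger > 0.7 and temperature_c > 30:
        return "seek warm body"   # heat-seeking feeding instinct
    if temperature_c < 15:
        return "rest"             # too cold to fly
    return "wander"               # default behavior

print(mosquito_step(32, 0.9, False))  # seek warm body
print(mosquito_step(32, 0.9, True))   # evade
```

Whether a lookup table like this deserves the word "conscious" is, of course, the question at issue; the sketch only shows how little machinery the observable behavior requires.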

Does a dog catch itself thinking? Can't claim to know. But there must be some threshold more significant than ordinary consciousness, if indeed the mosquito is conscious. That threshold must be self-consciousness.

But will a machine not know when it has been turned on? There you go--self-consciousness. Perhaps we are closer than we choose to accept.

----------


## YesNo

> I suppose I would want to know exactly what you think my metaphysics is.


I don't know what it is. That's what I am trying to find out. For example, I assume you are an atheist. That's fine, but there are different kinds of atheists. I don't know to what extent you are a materialist.

Some atheists believe consciousness can be reduced to unconscious matter. I think this position, given quantum physics, has been discredited. It is no longer scientific. There are others who promote panpsychism, such as Thomas Nagel. They admit that consciousness cannot be reduced to unconscious matter, but in order for reductionist thinking to be valid, they must have consciousness at all levels, including the quantum level. Then consciousness would not be completely emergent from unconscious matter. I see this as a form of dualism.

My view is idealism: unconscious matter does not exist. Everything is conscious. So panpsychism seems to make sense to me; however, I am not a reductionist. Consciousness not only goes down to the lowest forms of reality we are aware of, but also up beyond what we are aware of. The reason I am interested in this thread is that I don't really know what I think is the case until I talk it out with someone who disagrees with me.




> I stated all my beliefs a few posts ago. I do not have an opinion on whether electrons are conscious. Remember, it is you who need this opinion. Why are you willing to grant consciousness to an electron, but ready for a death match when it comes to merely considering it for a futuristic machine which only operates on algorithms? Electrons, apparently operate from a more lofty paradigm than mere machine algorithms.


I don't need quantum reality to be conscious. It just seems that quantum particles are making choices, and they might as well be considered conscious. However, I think Nagel's atheism would need something like this. He has to get consciousness at all the lower layers of reality. I just need it at the higher levels, but it could be everywhere.




> You need to insist there is some kind of magical, mystical threshold to consciousness, unapproachable, uncreatable by all but God. I do not share this obsession. To you consciousness seems now vested with what is holy about the universe. You have that disguised need for what is holy. I think we all have that need. I try not to let it interfere with my process.


I am not trying to disguise anything. I have admitted being a generic panentheist although I don't profess any specific religion. I do yoga, meditate, recite mantras, but that is about it for religious practice. The need is real. Why? Because we wouldn't be here otherwise.




> I know what it does: it clears the way for saying consciousness had no beginning. That consciousness is at the heart of operations in the universe and always has been. That is another supposition I am not pressed to make.


Right. Consciousness had no beginning. The universe did. Consciousness is why the universe is here. That is why I am interested in the Big Bang. The Big Bang attempts to show _without using consciousness_ that the universe could have been created from nothing. I want to see to what extent that "without using consciousness" part is necessary. It is a challenge to my idealist position. That would be my interest in papers such as the one you cited by Crawford.




> I do not say electrons do not have consciousness, either. But on our thin knowledge of quantum operators I cannot jump out there simply because I would like for something to be true. I only take a stance when there is no other logical choice. Do I have a preference all of the time? Yeah. But not a stance. My stance is that there isn't very good evidence for a firm stance.


The only reason I think an electron could be conscious in its limited way is that its behavior can be modeled as a _choice_. It looks conscious more so than a machine that passes the Turing test does. Sure, that machine might fool me, but once I know it is a machine and operating under deterministic or even random algorithms, then I know it is not conscious based on Searle's Chinese room argument. With the electron I can't reduce its behavior to either deterministic or random processes. There are no hidden variables at that level of reality. So, I can't invoke Searle's argument against its consciousness.

----------


## YesNo

> Consider a mosquito. What is its consciousness? I must add another belief to my stated arsenal and say I do not believe it is self-conscious. The mosquito is not aware that it is conscious. It does not catch itself thinking. How do I know? I know where a bear makes duty, too.


I don't know what its consciousness might be. It moves around and avoids my hand when I try to brush it away. The only criterion I have for saying that something initiating changes is not conscious is whether those changes are determined by an algorithm. I don't think a mosquito or even a virus is so determined. Why? Because a quantum particle is not so determined, and they are much smaller.




> Whether the mosquito is really conscious at all would probably be a reasonable question. If reacting to temperature and hunger pangs and a host of pre-programmed instincts is all that is required, I feel superbly confident those criteria can already be met by our machines. Such are the "decisions" of a mosquito.
> 
> Does a dog catch itself thinking? Can't claim to know. But there must be some threshold more significant than ordinary consciousness, if indeed the mosquito is conscious. That threshold must be self-consciousness.
> 
> But will a machine not know when it has been turned on? There you go--self-consciousness. Perhaps we are closer than we choose to accept.


Thomas Nagel considered similar questions in an essay "What is it like to be a bat?" http://organizations.utep.edu/Portal.../nagel_bat.pdf I have not read it with enough care, but I have paid more attention to his essay "Panpsychism" in "Mortal Questions".

----------


## desiresjab

Let me state for the nth time I consider myself an agnostic. That's okay, it probably seems hard to believe at times. I can lean both ways, which gives me the flexibility to play devil's advocate in either direction. If you were not advocating some of these ideas I would have to be doing it myself. I think they are great ideas.

What I want out of this is sight. Such a journey is made almost entirely alone in the company of others. I want a picture to believe in. Strangely, people find such pictures for themselves all the time. The mind can make almost anything it wants. I don't ask for much--I just want _the_ picture. I view much lovely art, but I do not think I have seen _the_ picture yet.

We can go far without the right picture. I do not actually need to find the right picture, I only need to be allowed to search for it forever.

If we knew these things that drive our curiosity we would no longer be curious.

To say electrons have consciousness "of a type," is a lot of wobble room.

I go back to my mosquito friend. It _may_ be conscious but it is not self-conscious. What is consciousness without self-consciousness but a bunch of programmable instincts entered as parameters which determine behavior? We could easily model that, and I believe the machine would be every bit as conscious as the mosquito, if not a little more. The mosquito, always starting anew, has no concept of yesterday or time past, but the computer does.

Our machines already have primitive consciousness; what they lack are actual emotions we would construe as genuine. Concepts like _willfulness_ and _awareness_ are pretty abstract and vague anyway, not to mention consciousness itself. They are words we have used forever without bothering to define precisely. What is consciousness to you?

I doubt that the mosquito is even conscious. It has an on and off state. It reacts, it tries to preserve itself--but does it need consciousness for that? It does not _think_ of itself, it does not _reflect_ on its thoughts. I do not believe the mosquito is making willful choices, but following its programming, so of course I find reason to doubt the electron too. Making a choice really is a matter of definition.

Some people do not find it appealing that consciousness could arise from lump matter. I find it immensely appealing that so much could be enfolded that would never be suspected.

----------


## YesNo

> Let me state for the nth time I consider myself an agnostic. That's okay, it probably seems hard to believe at times. I can lean both ways, which gives me the flexibility to play devil's advocate in either direction. If you were not advocating some of these ideas I would have to be doing it myself. I think they are great ideas.
> 
> What I want out of this is sight. Such a journey is made almost entirely alone in the company of others. I want a picture to believe in. Strangely, people find such pictures for themselves all the time. The mind can make almost anything it wants. I don't ask for much--I just want _the_ picture. I view much lovely art, but I do not think I have seen _the_ picture yet.
> 
> We can go far without the right picture. I do not actually need to find the right picture, I only need to be allowed to search for it forever.
> 
> If we knew these things that drive our curiosity we would no longer be curious.


I will assume you are agnostic rather than atheistic. I don't have the picture either and what I understand now will likely change. 




> To say electrons have consciousness "of a type," is a lot of wobble room.


Yes. It is very vague. I don't know what consciousness is. An electron is not conscious or aware the way I am. 




> I go back to my mosquito friend. It _may_ be conscious but it is not self-conscious. What is consciousness without self-consciousness but a bunch of programmable instincts entered as parameters which determine behavior? We could easily model that, and I believe the machine would be every bit as conscious as the mosquito, if not a little more. The mosquito, always starting anew, has no concept of yesterday or time past, but the computer does.
> 
> Our machines already have primitive consciousness; what they lack are actual emotions we would construe as genuine. Concepts like _willfulness_ and _awareness_ are pretty abstract and vague anyway, not to mention consciousness itself. They are words we have used forever without bothering to define precisely. What is consciousness to you?
> 
> I doubt that the mosquito is even conscious. It has an on and off state. It reacts, it tries to preserve itself--but does it need consciousness for that? It does not _think_ of itself, it does not _reflect_ on its thoughts. I do not believe the mosquito is making willful choices, but following its programming, so of course I find reason to doubt the electron too. Making a choice really is a matter of definition.
> 
> Some people do not find it appealing that consciousness could arise from lump matter. I find it immensely appealing that so much could be enfolded that would never be suspected.


I don't know how the words "consciousness", "self-consciousness", "willfulness" and "awareness" serve to differentiate something. I can only experience my own awareness, willfulness, consciousness and self-consciousness and they are all filtered through my being a member of the human species. 

Every species has its own set of constraints on how it can interact with the world. We have different constraints than a mosquito but that doesn't mean the mosquito is any less able to make a choice within its own species constraints. All individuals within species, including humans, have dispositions to act in certain ways, but that doesn't mean there is no choice available. I choose between chocolate or vanilla ice cream perhaps disposed today to pick chocolate 60% of the time. The wave function for that choice would imply those probabilities today if one bothered creating it. But I still make a choice. The mosquito is disposed to move in multiple directions when my hand approaches. I have no reason to claim it has no choice in the matter except to support a metaphysical belief that it can be reduced to a machine.
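The 60/40 ice-cream disposition can be written as a categorical probability distribution: it fixes the long-run frequencies while leaving each individual pick open. A toy sketch of sampling from such a disposition:

```python
import random

# A "disposition" modeled as a probability distribution over options.
# The distribution constrains the long-run frequencies, not any single
# choice. The 60/40 split is the example from the discussion.

def choose(options, weights, rng=random):
    return rng.choices(options, weights=weights, k=1)[0]

picks = [choose(["chocolate", "vanilla"], [0.6, 0.4]) for _ in range(10000)]
print(picks.count("chocolate") / len(picks))  # roughly 0.6 over many trials
```

Of course, the sketch is agnostic on the metaphysical question: a pseudorandom sampler reproduces the same statistics whether or not anything is "choosing."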

However, I don't think a machine has any choice. And I can say that with more certainty than I can say anything about the mosquito because I can trace back the machine's programming to a real programmer. I disagree with using this programming metaphor when talking about reality that someone has not actually programmed. Programming implies the existence of a programmer or an "intelligent designer". If I cannot identify a programmer through some historical records, there is no justification to say something was programmed.

I know some theists like how the programming metaphor implies the existence of an intelligent designer, but I think accepting that metaphor comes at too great a cost. I am not a 19th century Christian apologist facing a scientific view that is totally deterministic with an underlying reduction to unconscious matter. That scientific view changed within science almost a hundred years ago. With determinism undermined there is no need to continue with intelligent design. Whatever God is real, He or She is far more interesting.

----------


## desiresjab

The mosquito weighed the options carefully, then went right instead of left.

----------


## desiresjab

I do not see water charging from a dam's gateway into a divided sluice-way having any more choice about where it goes than it had on its own composition. But give me all the weights, angles and forces involved, and I will give you back a probability wave for a particular molecule's likely "choice" of sluices.

What constitutes a choice and what constitutes consciousness appear to be mere definitions, when one peers behind Oz's curtain.

If it is left up to me, then, I will make the line of demarcation at self consciousness--the ability to reflect on what one is thinking about. Anything else would be defined as pre-consciousness. The worm and the mosquito are pre-conscious, but other life forms have varying degrees of consciousness. Saying just which ones, though, turns out to be difficult.

----------


## YesNo

> I do not see water charging from a dam's gateway into a divided sluice-way having any more choice about where it goes than it had on its own composition. But give me all the weights, angles and forces involved, and I will give you back a probability wave for a particular molecule's likely "choice" of sluices.


Probability is useful in two different contexts. One is where we do not have all the information we theoretically could have and so make a prediction based on what we do know. That is the kind associated with the water example you mentioned above. 

The other is the kind where there is no additional information to obtain; no hidden variable is left to consider. We have all the information and the results are still indeterminate. That is the kind associated with quantum uncertainty. Some say this is "random", but that is a misleading metaphor, and the distributions are not necessarily uniform like the flip of a fair coin. This uncertainty is difficult to work with, which is part of why speculations such as many worlds have a hard time establishing themselves as consistent. The outcomes could instead be modeled as choices rather than reduced to mere chance.

At the quantum level it is easy to know that we have all the information. At the level of our species or any other species it is not clear whether a choice was made (inherent indeterminacy) or whether additional information could have predicted the results accurately. I assume any living organism can make a choice within the constraints set up by its species. Others don't make that assumption. I think the assumption is justified based on the fact that at an even lower level, quantum indeterminacy exists.




> What constitutes a choice and what constitutes consciousness appear to be mere definitions, when one peers behind Oz's curtain.


I agree. 




> If it is left up to me, then, I will make the line of demarcation at self consciousness--the ability to reflect on what one is thinking about. Anything else would be defined as pre-consciousness. The worm and the mosquito are pre-conscious, but other life forms have varying degrees of consciousness. Saying just which ones, though, turns out to be difficult.


I think that might be too restrictive.

Some changes do not involve consciousness. I grant that. The changes that a machine or computer make are not conscious. We really don't want our machines making independent choices. They can be explained by their design or programming. As Nagel would put it when he asked what is it like to be a bat, it would be like nothing to be a computer. That means the computer is not conscious. What this goes against is a behaviorism which tries to reduce consciousness to objective forms of behavior. Consciousness is subjective, not objective.

Rather than behavior, I would place the line of demarcation at the ability to make a choice. I will define a choice as a behavioral change for which indeterminacy has to be accepted. A choice is something that cannot be completely determined given all the information. The most we can construct is a probability distribution. This would exclude the flow of water through the dam's gateway. It would exclude a computer passing the Turing test. It would exclude weather patterns. However, it would include any living creature until we get more information to prove that they can be reduced to machines. It would also include quantum reality.

I think Descartes put the demarcation where you would like to put it, but I can't remember exactly.

----------


## desiresjab

> Probability is useful in two different contexts. One is where we do not have all the information we theoretically could have and so make a prediction based on what we do know. That is the kind associated with the water example you mentioned above. 
> 
> The other is the kind where there is no additional information to obtain, no hidden variable is left to consider. We have all the information and the results are still indeterminate. That is the kind associated with quantum uncertainty. Some say this is "random", but that is a misleading metaphor. The distributions also are not uniform such as the flip of a coin. This makes it difficult to work with the uncertainty and why speculations such as many worlds have a hard time establishing themselves as consistent. That is, they could be modeled as choices and not reduced to mere chance.
> 
> At the quantum level it is easy to know that we have all the information. At the level of our species or any other species it is not clear whether a choice was made (inherent indeterminacy) or whether additional information could have predicted the results accurately. I assume any living organism can make a choice within the constraints set up by its species. Others don't make that assumption. I think the assumption is justified based on the fact that at an even lower level, quantum indeterminacy exists.
> 
> 
> 
> I agree. 
> ...


If something cannot recognize it is making a choice, I don't think it did. I do not accept gobbledy-gook language such as "_it makes a choice within the constraints of its species_."

_A choice is something that cannot be completely determined given all the information,_ is another one, that bothers me. Which information? All information? I don't have to accept that behavior is indeterminate at all, just very complex.

----------


## YesNo

> If something cannot recognize it is making a choice, I don't think it did. I do not accept gobbledy-gook language such as "_it makes a choice within the constraints of its species_."


By constraints of a species I am just pointing out that we are limited by the species we belong to. We can't hear, for example, all the sounds that members of other species can hear, nor see the same range of the electromagnetic spectrum as members of other species. Those are constraints that are specific to a species. Bats for instance hang by their feet and fly. We don't.

So the choices we can make are limited by these constraints. 

That is all I am trying to say.




> _A choice is something that cannot be completely determined given all the information,_ is another one, that bothers me. Which information? All information? I don't have to accept that behavior is indeterminate at all, just very complex.


This goes back to uncertainty in quantum physics. Even with all the information that is available to us, we cannot determine exactly what the quantum particle will do. In other words, we can't repeat the experiment expecting to confirm the result. We can only assign a probability to what might happen. For example, measure the position of an electron. Then measure the momentum. Then go back and measure the position. That second measurement of position cannot be predicted exactly.
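A toy simulation can make this measurement story concrete. Position and momentum are awkward to simulate directly, so this sketch uses spin measurements as a finite-dimensional analogue with the same non-commuting structure; the bases, preparation, and trial count are illustrative assumptions, not anything from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spin-1/2 analogue of measuring position, then momentum, then position:
# here we measure Z, then X, then Z. Z and X do not commute, so the
# intervening X measurement makes the second Z outcome irreducibly 50/50.
up_z = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
down_x = np.array([1, -1], dtype=complex) / np.sqrt(2)

def measure(state, basis):
    """Projective measurement: pick a basis vector with Born-rule probability."""
    probs = [abs(np.vdot(b, state)) ** 2 for b in basis]
    k = rng.choice(len(basis), p=probs)
    return k, basis[k]          # outcome index, collapsed state

repeats = 10000
agree = 0
for _ in range(repeats):
    state = up_z                                 # prepared spin-up along Z
    z1, state = measure(state, [up_z, down_z])   # first Z measurement
    _, state = measure(state, [up_x, down_x])    # intervening X measurement
    z2, state = measure(state, [up_z, down_z])   # second Z measurement
    agree += (z1 == z2)

print(agree / repeats)   # close to 0.5: the repeat measurement is unpredictable
```

Even knowing the state exactly before each step, the best available prediction for the second Z outcome is the 50/50 distribution itself, which is the "all the information and still indeterminate" situation being described.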

I _define_ situations like that as having enough freedom to make a choice. It is just a definition based on my ability to determine the outcome of a measurement. I don't know if the electron has a subjective state, but based on this definition of choice, I will then _assume_ the electron has a subjective state. Having a subjective state based on Nagel's "What is it like to be a bat?" is just the claim that it is like something to be an electron, but I don't know if that is the case or not. It is only what I could derive logically from the definition of choice and assumption of subjectivity. If I keep deriving statements hopefully I will be able to come up with a statement that I can test empirically. At that point I could claim that this budding theory is falsifiable.

----------


## desiresjab

If I thought for sure there was a God, I would be mighty angry at that entity for leaving me so ignorant, for expecting me to believe garbage like the Koran on faith.

God owes me an explanation, and not the phony one in holy books. C'mon, almighty, you can do a little better than that, or are you just a two-bit, limited God who gets your jollies torturing those you call your children? Explain yourself, rat!

The rat God, that is who we worship. The rat God who expects more than is reasonable. It seems to me the devil and God are the same entity.

God is so wicked that those old books that try to put a good face on him fail miserably. He still comes out Joseph Stalin. It is time for God to declare himself openly or GTFO of our universe.

Oh, I must be very angry today.

----------


## YesNo

> Oh, I must be very angry today.


Walks and slow breathing sometimes help. 



Regarding cosmology, I recently found out that black holes might not exist. 

On the one hand there are theoretical objections to their existence. 

Here is one from around January 2014 claiming that Hawking no longer believes in them: http://news.nationalgeographic.com/n...ace-astronomy/

In September, 2014, there was also a report from Mersini-Houghton that they cannot exist: http://phys.org/news/2014-09-black-holes.html

On the other hand there is one failed prediction assuming the radio source at the center of our galaxy, Sgr A*, is a black hole:

In early 2014 a cloud called G2 was supposed to head into Sgr A* and calculations based on black hole theory claimed the black hole would absorb the cloud. Wikipedia (https://en.wikipedia.org/wiki/Sagitt...cretion_course) described the event and reports that the gas cloud survived the encounter. Of course, Sgr A* may still be a black hole and the theory or calculations used in the prediction just need some modifications.

But all of this makes me wonder if Sgr A* is really a black hole and whether black holes are even possible.

----------


## desiresjab

> Walks and slow breathing sometimes help. 
> 
> 
> 
> Regarding cosmology, I recently found out that black holes might not exist. 
> 
> On the one hand there are theoretical objections to their existence. 
> 
> Here is one claiming that Hawking does not believe in them around January, 2014: http://news.nationalgeographic.com/n...ace-astronomy/
> ...


I will have to get back to you after I have read the links. (I actually do read them). I have very similar doubts about black holes. They were a possibility found in the field equations for the first time by Billy Sidis, I believe, when he was a fourteen year old at Harvard.

As we know, the mathematical existence of something does not prove something physically exists.

----------


## desiresjab

After all, infinite mass at a singularity is a mathematical convenience or extrapolation. No one can really believe in infinite mass, but maybe in something we say mathematically _approaches_ it.

Black holes are one of our most beloved cultural icons by now. Let's see how easily we can shake them.

When one author says he has proven mathematically the impossibility of black holes, that proof holds only under a particular model with a particular set of constraints. I truly doubt it amounts to anything as final as a universal proof yet.

The abstracts below some of the papers point out the tools they are using. No surprise that all the tools were named after great mathematicians, because that is the only way we make progress after all the talking: Hamilton's quaternions, Hilbert spaces, Abelian groups, and matrices in a framework of Lie algebra. It does not get any clearer than that what the tools are.

Science will take an awful public flogging when it needs to change its paradigm again. Many powerful people believe cosmological research is a waste of time and money anyway. They would like strictly practical research with no long range vision. Do we really care about better and better cellphones if we cannot search for our origins?

----------


## desiresjab

You have to admit, this subject is more interesting than social philosophy, which is now PC from top to bottom. I sure hope there are not too many suicides by marginalized people unable to cross the George Washington Bridge one more time. You know, them danged monuments is going to have to come down too.

I recently took a peek at Einstein's field equations. They are not as daunting as some mathematics I have looked at. I see partial differential equations. Anyone familiar with differential equations and what the variables stand for would basically understand the mechanics of the math. Of course, understanding those variables well would require a lot of advanced physics. There are powerful ideas relating to many fields of mathematics and physics embedded in those innocent-looking equations. Somewhere within the manipulations, all the tools I mentioned in the last post are likely to come into play, along with many more. Unfamiliarity with any of these individual elements dooms deep understanding of the subject. That is why almost all of us will remain railbirds to the real action, including myself, of course...

----------


## YesNo

One doesn't have to understand these equations at more than a superficial level. As I see them they are just maps of some objective part of reality.

I'm looking through a few books to try to understand the link between black holes, dark matter and the singularity at the big bang. My suspicion at the moment is that there is only circumstantial evidence for black holes, and the encounter of G2 with Sgr A* would have been the only direct evidence regarding them so far. Also, I think dark matter comes from missing matter assumed to be there from the big bang, but it does not seem to be needed for gravitation within our own galaxy. What I am trying to find out is how much of all of this is speculation and how much is based on real observations.

----------


## Dreamwoven

Some links from EarthSky.com on all this about black holes, dark matter and the big bang. The last is about a nearby Dark Matter Galaxy: go figure.
http://earthsky.org/space/dark-matte...c77b-394044013
http://earthsky.org/space/the-cheshi...p-of-galaxies?
http://earthsky.org/space/a-nearby-d...c77b-394044013

----------


## desiresjab

> Some links from EarthSky.com on all this about black holes, dark matter and the big bang. The last is about a nearby Dark Matter Galaxy: go figure.
> http://earthsky.org/space/dark-matte...c77b-394044013
> http://earthsky.org/space/the-cheshi...p-of-galaxies?
> http://earthsky.org/space/a-nearby-d...c77b-394044013


With so many results in hand and people pushing different models, the real job is in sifting through what we already have and running the appropriate experiments where possible.

Filaments of dark matter emerging from the earth might be a testable idea, since the highest concentration of the hair roots lies only 600,000 miles out in space from us.

----------


## YesNo

As I understand it, one of the difficulties of finding dark matter, which I doubt exists, is that although there is more of it than regular matter, it is more smoothly distributed through space. Since it does not react to electromagnetic forces it cannot stick together, so it doesn't clump; without those forces, it goes through everything. So it is diffuse, and the Sun, which is relatively close to us, is locally so massive that the amount of nearby dark matter is too small by comparison to be picked up by our measurements. However, I think that should still be the case at the galactic level.

It occurred to me that dark matter is like a field. One should be able to replace it with a variable G value and not have to worry about supposing there is a form of matter out there.
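That variable-G idea can be put in back-of-the-envelope form. The sketch below uses rough, assumed numbers (a Milky-Way-like visible mass and a flat 220 km/s rotation speed, both purely illustrative): Newton with the visible mass alone gives orbital speeds falling off as 1/sqrt(r), and holding the flat observed speed instead forces the effective G to grow linearly with radius.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_visible = 2e41   # kg, rough visible mass of a Milky-Way-like galaxy (assumed)
v_flat = 220e3     # m/s, roughly flat observed rotation speed (assumed)
kpc = 3.086e19     # metres per kiloparsec

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * kpc
    v_newton = math.sqrt(G * M_visible / r)  # circular speed from visible mass alone
    G_eff = v_flat ** 2 * r / M_visible      # G needed to hold v_flat with M_visible
    print(f"r = {r_kpc:2d} kpc   v_newton = {v_newton / 1e3:3.0f} km/s   G_eff/G = {G_eff / G:3.1f}")
```

The same mismatch can of course be read the other way, as extra unseen mass rather than a varying G; the point is only that the two descriptions fit the same rotation data.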

----------


## desiresjab

Given our record with less advanced cultures, a cosmic speed limit at the speed of light imposed on our kind makes sense from the point of view of beings advanced enough to actually live up to the Star Trek prime directive.

I think I should point out again that Curvature Cosmology eliminates the need for dark matter and dark energy. As one of Dream Woven's links pointed out, black holes are not a reality in some workable models.

The possibility that vastly distant celestial objects might be equally well described under two mutually exclusive interpretations raises again the possibility that there may be no further reality than our models. Whatever we think of as "out there" is only a matter of interpretation and which model we use, and either model is equally valid, like waves and particles in the study of light. There is something vaguely scary about that.

----------


## desiresjab

We are animated postulates.

----------


## YesNo

Here is Stephen J. Crothers explaining why there aren't any black holes: https://www.youtube.com/watch?v=jINHHXaPrWA It's a little long, but I found it entertaining. It looks like the only way to bring theoretical physicists down to earth when they start dividing by zero is satire.

I don't know if all of the competing models are equally valid, but until one gets some experimental data there is only the model's internal logic and one's personal metaphysics to justify it. I understand the Planck data has not, for example, provided any evidence of a multiverse, and the lambda CDM standard cosmology model is still standard, judging from this lecture by Charles Lawrence: https://www.youtube.com/watch?v=ZCZdrfDHwgU This one is also long.

----------


## desiresjab

Good stuff. Only viewed the first one so far. Crothers is someone at the spear point of research. Great to hear him speak with such authority. Much of his talk inadvertently illustrates another idea I have been talking about. He and his fellows are still hashing out the details of papers published in 1916. They are translating overlooked Russian papers from the 1940s. You see what I mean? See how long it takes humans to hash out truth in these difficult matters. With cyborg researchers, everything we are painstakingly piecing together would be accomplished in a matter of days or even hours.

----------


## YesNo

I like Crothers also. As I see it, we are the cyborgs. 

The belief in dark matter can be traced to big bang nucleosynthesis, BBN (distinct from later stellar nucleosynthesis), and the discrepancy between the baryon-photon ratio observed in the microwave background and the way galaxies appear to behave now: https://en.wikipedia.org/wiki/Big_Bang_nucleosynthesis It is interesting that the standard BBN theory matches the data for hydrogen and helium, but not for lithium; however, the non-standard models apparently mess things up even more. It is good to see data putting limits on speculation.

----------


## desiresjab

I have to reply before I have properly viewed everything. But I must say I wish Crothers had gone even longer. He taught me some things about the tensor calculus superscripts and subscripts. As I inch my way along, I can get closer to the mathematical ideas that inspire these guys. There are some gaps in my understanding so confusing I cannot even point out what they consist of yet. But as I creep along, unexpected tiny parts of the cavern are suddenly illuminated from various investigations. Maker of collages, I take what I can to patch my understanding of existence into a coherent picture. I am sure that is what we all do.

I feel the cynic coming on. Contemplation of diverse models which each offers a reasonable view of the universe can do that, methinks. It's enough to make one go religious!

The thing is, most of those models got one or more things wrong. They each have flaws. There may be no such thing as truth. Can't define existence, can't define consciousness, can't define intelligence--we have a ways to go. The search itself is what is exciting, and I am disappointed that the vast majority of people miss out on all of the excitement and anticipation that goes with cosmology watching.

----------


## Dreamwoven

I'm following the cosmology debate, and enjoying it!

----------


## Dreamwoven

Here is a collection of conundrums for your interest: http://www.space.com/topics/expert-v...lawrence-kuhn/

----------


## desiresjab

> Here is a collection of conundrums for your interest: http://www.space.com/topics/expert-v...lawrence-kuhn/


I took some interesting rides there. At the moment I list slightly toward our being a type of simulation or auxiliary consciousness.

----------


## YesNo

> Here is a collection of conundrums for your interest: http://www.space.com/topics/expert-v...lawrence-kuhn/


I looked at Kuhn's video about information in this link: http://www.space.com/29477-did-infor...he-cosmos.html

I liked the way he presented the issue about what is fundamental and that there should be something that is fundamental. Also the title was interesting: "Does Information Create the Cosmos". That's the challenge. What is the most fundamental reality that creates what we experience right now?

Rather than "information", I would say the fundamental reality is "consciousness" in a general sense, not our human example of it. However, that would require finding a way to get electrons and photons from consciousness. This is not simply the claim that they are "conscious" themselves in some way, but that they are a manifestation to us of consciousness. I don't know how that happens, but I don't think information works as the fundamental reality, because of Searle's Chinese room argument and Nagel's discussion of what it is "like" to be something subjectively.

----------


## YesNo

Another one of Kuhn's topics is consciousness: http://www.space.com/30937-when-robo...conscious.html

This is a good overview containing video interviews with some of the key people involved. What I found most interesting was Kuhn's separating the different positions into five categories. This is how I understand those categories.

1) Materialism. The only fundamental reality is unconscious matter. Consciousness emerges from unconscious matter.

2) Qualia Fields. Consciousness is not reducible to unconscious matter, but it is a separate field interacting with physical fields.

3) Panpsychism. Consciousness is not reducible to unconscious matter. Matter contains consciousness as a property at the smallest levels. There is some emergence involved to get from primitive to more complicated forms of consciousness.

4) Dualism. Unconscious matter and conscious souls exist interdependently. Both unconscious matter and consciousness are fundamental.

5) Idealism. Consciousness is the only fundamental reality. Unconscious matter does not exist but what appears to be unconscious emerges from an underlying consciousness. 

Most people would agree with either 1 or 4. They are either materialists or dualists. I would agree with 5. 

As I see the idealist position, not everything that we have a name for is conscious as that object, but the object emerges from a simpler reality that is conscious. Consciousness is characterized by an ability to make a choice, no matter how constrained that choice may be. For example, a table is not conscious as a table, but the physical reality making up the atoms of the table is conscious or emerges from a deeper consciousness. Similarly, the brain is not conscious as an objectively functioning brain, although the cells within the brain would be conscious as cells. Nor is a robot conscious as a robot. If the object can be identified with a programmer who determined its behavior, then it is not conscious.

Positions 1, 2 and 3 are reductionist positions while 4 and 5 are not.

----------


## YesNo

I went to Kuhn's "Closer to Truth" site for more interviews: http://www.closertotruth.com/ The interviews are short and give me a better understanding of the key people involved by listening to them speak.

----------


## desiresjab

> I went to Kuhn's "Closer to Truth" site for more interviews: http://www.closertotruth.com/ The interviews are short and give me a better understanding of the key people involved by listening to them speak.


I used to watch the Kuhn show every chance I got.

----------


## YesNo

We got rid of cable some years ago. I never heard of Kuhn before this thread.

One of the psi experiments reminded me of the Turing test. Unfortunately I can't remember which person was being interviewed. The experiment used two people in different rooms. One was talking about whatever came to his mind. The other was just thinking about something he was viewing and tried to influence what the other was saying by using his thoughts. He could hear what the other was saying, but the speaking person could not see him. 

Also it looks like Ray Kurzweil didn't think that recent Turing test success met his standards: http://www.kurzweilai.net/ask-ray-re...he-turing-test

----------


## desiresjab

> We got rid of cable some years ago. I never heard of Kuhn before this thread.
> 
> One of the psi experiments reminded me of the Turing test. Unfortunately I can't remember which person was being interviewed. The experiment used two people in different rooms. One was talking about whatever came to his mind. The other was just thinking about something he was viewing and tried to influence what the other was saying by using his thoughts. He could hear what the other was saying, but the speaking person could not see him. 
> 
> Also it looks like Ray Kurzweil didn't think that recent Turing test success met his standards: http://www.kurzweilai.net/ask-ray-re...he-turing-test


Yikes! Look at this. I do not think I have posted it before.

http://www.tony5m17h.net/SiragMcKayE8ID.pdf

There are some prose sections to this long paper that are readable. This is the Lie algebra they are using to try to connect consciousness to quantum mechanics. At first I was under the impression that they were using seven and a half billion dimensions. But, ah, no worry. These are regular square matrices. Do not be discouraged that one of the matrices would be seven miles wide if each entry position were given one inch of room. Get your pencil out and start calculating!

I have no idea exactly how difficult this Lie algebra might be, because it is hard to even get an inroad. But I do know that p-adic theory may be the toughest I have looked at in terms of mastery. In p-adic theory (which is used within Lie algebra), 48 and 1000048 are extremely close numbers when p=10. Not sure how close 47 and 1000048 are, but I believe they are not considered close at all.
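That notion of closeness is easy to compute. Here is a minimal sketch of the 10-adic distance (note that p = 10 is not prime, so strictly this is only an illustrative metric rather than a true p-adic absolute value):

```python
def padic_dist(a, b, p=10):
    """p-adic distance: p**(-k), where p**k is the largest power of p dividing a - b."""
    if a == b:
        return 0.0
    n, k = abs(a - b), 0
    while n % p == 0:
        n //= p
        k += 1
    return float(p) ** (-k)

print(padic_dist(48, 1000048))  # 1e-06: the difference 1000000 is divisible by 10**6
print(padic_dist(47, 1000048))  # 1.0: the difference 1000001 is not divisible by 10
```

So 47 and 1000048 sit at the maximum distance of 1: they are indeed not considered close at all.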

Lie algebra evolved out of differential geometry. It is simply amazing how closely Gauss is tied to all these advanced maths, some of which developed long after his death. The hand of Gauss is still all over contemporary research.

----------


## YesNo

I didn't understand the paper. Since it referenced many worlds multiple times I assume it is false. Or possibly even a prank.

Edit: Looking more at the paper, I don't think it is a prank, but I don't trust it because of the many worlds references.

Here is a link to a more readable account of Frank Dodd (Tony) Smith, Jr's theory: http://www.soebooks.com/11/240-of-25...Theory-of.html

----------


## desiresjab

I presented that link as a look-see at what the top dogs are using mathematically. To say they are wrong would be the height of presumption for me, since I understand the math slightly better than a third grader understands algebra.

I actually could not tell you what the author believes after perusing the article, only that some very sophisticated tools are at work. I have wanted to get a look at how researchers are approaching any connection between quantum mechanics and consciousness. I was curious what kind of tools the great researchers would feel had some kind of chance at discovering these truths. Now I know. That is all. I do not side with anyone because I do not know enough to side with anyone.

----------


## YesNo

This article summarizes quantum mind: https://en.wikipedia.org/wiki/Quantum_mind

Smith's theory seems to be related to the Penrose-Hameroff position which I assume is still being pursued although its original version has encountered some falsification.

My gut feeling suggests that the brain is itself not conscious nor does it allow for consciousness to emerge from it. I don't think this is a quantum mind position, but I don't understand the quantum mind well. Regarding the brain, I suspect I would be closer to Chalmers on the issue. No physical theory, quantum or classical, can reduce consciousness to something objective and unconscious.

----------


## YesNo

I was looking for more information on quantum whatever and found this interview with Amit Goswami which made some sense to me: https://www.youtube.com/watch?v=bnQ63AOrs6s

It is about an hour. Some of the notes I took were:

1) Objects are possibilities.

2) Consciousness chooses without any signals.

3) There was some research done by someone called Greenberg (?) in 1993 that I would like to look up showing that human brains interact non-locally. EDIT: I think I found the paper: http://www.deanradin.com/evidence/Grinberg1994.pdf

4) I didn't understand his reference to Hofstadter's "Godel, Escher, Bach", but I will have to look further into that regarding "tangled hierarchies" which has to do with perceiving and memory both being necessary for each other to exist.

5) Nothing becomes something.

6) Sheldrake's morphogenetic fields allow for something I was unclear about, although I have heard of the concept before.

Anyway, just something else that is hopefully related to the topic.

----------


## desiresjab

These gentlemen all have a lot of pretty ideas. If consciousness cannot unfold from the brain, then it seems clear you do not believe in a creator. For the creator itself would have had a schematic in mind from which it created us.

----------


## YesNo

What do you mean by a "schematic"? I was just thinking how sexual reproduction is almost anti-machine-like. It is also communal. Machines are individualistic and isolated. 

Even quantum particles seem to behave as a group. Push them individually through a double slit and the final result forms the familiar wave pattern on the detection screen. Their choices seem to be based not only on their individual choices but on an overall group choice.
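That build-up of a collective pattern from one-at-a-time events is easy to sketch numerically. In this toy version (the fringe spacing, envelope, and counts are arbitrary assumptions), each "particle" is a single detection drawn from a two-slit intensity, yet the accumulated counts show the wave-like fringes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-slit intensity on the screen: fringes under a Gaussian envelope.
x = np.linspace(-1, 1, 400)                   # screen position, arbitrary units
intensity = np.cos(4 * np.pi * x) ** 2 * np.exp(-x ** 2 / 0.3)
p = intensity / intensity.sum()               # normalized detection probabilities

# Send particles one at a time: each is a single random detection event.
hits = rng.choice(x, size=5000, p=p)

counts, _ = np.histogram(hits, bins=40, range=(-1, 1))
# Bins near fringe maxima fill up; bins near fringe minima stay nearly empty.
print(counts.max(), counts.min())
```

No single detection shows a wave; the fringes appear only in the ensemble, which is the sense in which the group behavior emerges from individual choices.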

I also found reference to a neuroscientist, Mario Beauregard. His book, "Brain Wars", was in the library and I'm reading about placebos now.

----------


## desiresjab

> I was looking for more information on quantum whatever and found this interview with Amit Goswami which made some sense to me: https://www.youtube.com/watch?v=bnQ63AOrs6s
> 
> It is about an hour. Some of the notes I took were:
> 
> 1) Objects are possibilities.
> 
> 2) Consciousness chooses without any signals.
> 
> 3) There was some research done by someone called Greenberg (?) in 1993 that I would like to look up showing that human brains interact non-locally. EDIT: I think I found the paper: http://www.deanradin.com/evidence/Grinberg1994.pdf
> ...


I have only read the paper so far. Tough to emerge from it with much detail. Fortunately, the authors present word pictures to help dummies like me. Pretty impressive. I wish I were doing some of that research. To really understand it, I believe that would be necessary for me.

The experiments speak volumes in support of swamis and gurus who meditate. It seems the actual effects are minimal, however. No one has meditated strongly enough to lift a battleship from the water and place it a mile inland. I do not see that as possible with any amount of "mind power."

I do not believe the Global Consciousness Project has any results to brag up. Whatever the effects of meditation and concentration are, it seems they are small and unable to affect the larger scale of the world directly. Not surprising.

Can minds throw objects from a distance without touching them, like superheroes? Not so far. Will it ever happen? I will not say it is impossible. There are no superheroes yet.

----------


## YesNo

If materialism were true, there should be no effects at all. Not even small ones. It is probably a good thing they can't lift battleships out of the water.

Beauregard's "Brain Wars" is an interesting summary of placebo/nocebo effects, neurofeedback, neuroplasticity, hypnosis, psi, out-of-body experiences and mystical experiences. If materialism were true, none of these should even be reported.

I also found a copy of Goswami's "The Self-Aware Universe". His views in the video I linked to earlier have made me wonder just what he promotes. He seems to be a monistic idealist, that is, someone who maintains that consciousness is the only fundamental reality. That is my position. I am hoping he has a better support for that position than I do.

----------


## YesNo

I just read in Goswami's book (page 21) that Turing himself admitted that psi would be a way for a machine to fail the Turing test. The following quote is from "COMPUTING MACHINERY AND INTELLIGENCE" http://www.loebner.net/Prizef/TuringArticle.html where Turing addresses the "Argument from Extrasensory Perception":

_I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one's ideas so as to fit these new facts in._
Turing wrote, "If telepathy is admitted it will be necessary to tighten our test up." He would have to devise some "telepathy-proof room" assuming that were possible. Perhaps the best way would be to not allow any questioning that tested for psi ability.

Since we discussed the Turing test earlier, I think this argument by Turing himself is better than Searle's Chinese room argument I used earlier.

----------


## desiresjab

> I just read in Goswami's book (page 21) that Turing himself admitted that psi would be a way for a machine to fail the Turing test. The following quote is from "COMPUTING MACHINERY AND INTELLIGENCE" http://www.loebner.net/Prizef/TuringArticle.html where Turing addresses the "Argument from Extrasensory Perception":
> 
> _I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one's ideas so as to fit these new facts in._
> Turing wrote, "If telepathy is admitted it will be necessary to tighten our test up." He would have to devise some "telepathy-proof room" assuming that were possible. Perhaps the best way would be to not allow any questioning that tested for psi ability.
> 
> Since we discussed the Turing test earlier, I think this argument by Turing himself is better than Searle's Chinese room argument I used earlier.


Have not had time to listen to everything yet. Been busy elsewhere.

Just some passing thoughts.

For something to have no coding implies...

It means it would have that in common with randomness...

If consciousness has no coding, is it possible nevertheless with approximation techniques to get as close as any epsilon one can name to true conscious behavior, the way we get very close to randomness with pseudo-random techniques?

I think perhaps so, but I realize in your view there must always remain an impassable gulf.

Pseudo-random techniques fool people all the time, in sort of an analogy of the Turing test. Pseudo-randomness is the bread and butter of casinos.
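A small sketch of that last point: a deterministic pseudo-random generator (Python's built-in Mersenne Twister here; the seed and roll count are arbitrary choices of mine, not anything from the discussion) produces die-roll frequencies that pass a naive fairness check, even though nothing random is happening at all.

```python
import random
from collections import Counter

def frequency_check(n_rolls: int = 60_000, seed: int = 42) -> dict:
    """Roll a pseudo-random die n_rolls times and return each face's frequency.

    The generator is fully deterministic, yet the frequencies are
    statistically indistinguishable from a fair die for a simple test
    like this one.
    """
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

freqs = frequency_check()
# Each face should land near 1/6 ≈ 0.1667
print(freqs)
```

This is the "epsilon" idea in miniature: a coded process closing in on uncoded randomness as far as any simple observer can tell.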

----------


## YesNo

> Have not had time to listen to everything yet. Been busy elsewhere.
> 
> Just some passing thoughts.
> 
> For something to have no coding implies...
> 
> It means it would have that in common with randomness...
> 
> If consciousness has no coding, is it possible nevertheless with approximation techniques to get as close as any epsilon one can name to true conscious behavior, the way we get very close to randomness with pseudo-random techniques? 
> ...


There is also something called a "reverse Turing test" that has more practical importance than the Turing test itself: https://en.wikipedia.org/wiki/Reverse_Turing_test

This is when you want to be able to tell if you are talking to a human being or some computer or program especially with an internet exchange of information. This is why you have to type in those characters distorted in an image when entering information on some web sites. So the practical problem may be how to tell with a reasonable probability that a human being is on the other side of the communication.

In the future one might be able to use psi as a test as well. This assumes that we all have some psi ability and that a test can be formulated that would be able to detect this with the probability of false positives being low enough to be acceptable.
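To make the false-positive point concrete, here is a minimal Bayes' rule sketch. All the numbers are purely illustrative assumptions (a 50% human prior, 90% sensitivity, 5% false-positive rate); nothing here comes from real psi or CAPTCHA data.

```python
def posterior_human(p_human: float, sensitivity: float, false_positive: float) -> float:
    """P(human | test passed) via Bayes' rule.

    p_human: prior probability the other party is human
    sensitivity: P(pass | human)
    false_positive: P(pass | not human)
    """
    p_pass = sensitivity * p_human + false_positive * (1 - p_human)
    return sensitivity * p_human / p_pass

# Illustrative numbers only: half the connections are human, the test
# detects humans 90% of the time and wrongly passes bots 5% of the time.
print(round(posterior_human(0.5, 0.9, 0.05), 3))  # → 0.947
```

Even a modest false-positive rate keeps the posterior well below certainty, which is why such a test only ever gives a "reasonable probability" of a human on the other side.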

----------


## desiresjab

> There is also something called a "reverse Turing test" that has more practical importance than the Turing test itself: https://en.wikipedia.org/wiki/Reverse_Turing_test
> 
> This is when you want to be able to tell if you are talking to a human being or some computer or program especially with an internet exchange of information. This is why you have to type in those characters distorted in an image when entering information on some web sites. So the practical problem may be how to tell with a reasonable probability that a human being is on the other side of the communication.
> 
> In the future one might be able to use psi as a test as well. This assumes that we all have some psi ability and that a test can be formulated that would be able to detect this with the probability of false positives being low enough to be acceptable.


If there is anything to quantum consciousness, I have to assume that traditional methods of meditation, prayer, and the like must have reached the human limit of what is naturally attainable. As in other fields of investigation and endeavor, I would expect science to now take over, determine if there is anything to it, codify it, and learn to considerably increase human psi under laboratory conditions with the right stimuli. Such "techniques" would not be for the common man, at least not at first. Later they might be installed in a parlor game, some kind of futuristic analog of the Ouija board.

----------


## desiresjab

I just found the link below. This is a gateway video. It begins with a simple counting function defined by Ramanujan and clearly explains how this leads to the intricate math of modular equations. Along the way, stand by for the uniting of many objects from diverse fields of higher math into a coherent and understandable picture. Such objects as p-adic and l-adic numbers find their applicability in successive scales of the Mandelbrot set. We talked earlier about how mysterious the function of p-adic numbers was. For anyone with enough understanding who gets through this video, the practicality of that metric system will not remain a mystery. I think there is even a reference to Lie or Clifford algebras.

Earlier in the discussion we referenced Kronecker, noting that he belonged to a by-now minor school of mathematical philosophy that insists real numbers, and in particular whole numbers, are the basis of reality. As he famously said: _God invented the whole numbers, man invented all the rest_.

The progress of the problem of counting the additive partitions of a whole number, as outlined in the video, definitely moves in a direction Kronecker would have applauded: from approximations to exact numbers as solutions. Of course this is precisely the direction mathematicians always aim to move in the first place.

Many of these tools are the same ones being used by modern physicists and cosmologists in their investigations. Group theory is another field that is brought into focus briefly by the video. It is one of those overarching concepts that unite phenomena as diverse as the numerical and the physical into common systems. Groups are critical in the study of elementary particles. The ways many things can behave are explained by permutation groups, which essentially manipulate and map symmetries, putting every symmetry through its paces, so to speak.

https://www.youtube.com/watch?v=aj4FozCSg8g
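For anyone who wants to play with the partition numbers the video discusses, here is a short sketch using Euler's pentagonal-number recurrence. This is a standard textbook method, not one taken from the video itself.

```python
def partitions(n_max: int) -> list[int]:
    """Return [p(0), p(1), ..., p(n_max)], the additive partition counts.

    Uses Euler's pentagonal number theorem:
    p(n) = sum_{k>=1} (-1)^(k+1) * [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]
    where terms with a negative argument are dropped.
    """
    p = [1] + [0] * n_max          # p(0) = 1 (the empty partition)
    for n in range(1, n_max + 1):
        k, total = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

print(partitions(10))  # → [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

The exact values also exhibit Ramanujan's congruence p(5k+4) ≡ 0 (mod 5): note p(4) = 5 and p(9) = 30 in the output.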

----------


## YesNo

It was interesting seeing research on partition numbers in the video you cited.

----------


## Dreamwoven

There is an article in Space.com that suggests that half the identified planets by Kepler are false positives: http://www.space.com/31320-kepler-gi...positives.html.

----------


## YesNo

They were expecting there to be false positives, but not so many. It looks like we need a new and better telescope anyway since Kepler had a malfunction a couple of years ago. There are over 1000 that passed the test.

----------


## Dreamwoven

Do you know if one is being built and if so when it will be ready?

----------


## Dreamwoven

The Spitzer is operating, but I have no idea what it does: http://www.spitzer.caltech.edu

----------


## Dreamwoven

The James Webb Space Telescope is another, due to be launched in late 2018. It uses infrared detection: http://jwst.nasa.gov/about.html

----------


## YesNo

Here is a list of space telescopes. Some of them I hadn't heard of before. https://en.wikipedia.org/wiki/List_of_space_telescopes

----------


## Dreamwoven

That's a mind-blowing list. I had no idea...

I was enjoying your debate with _desiresjab_ so now I will go quiet and study the list of telescopes to try to get a new perspective on telescopy.

----------


## YesNo

> This is fascinating. Back to basics.
> 
> http://www.bookpump.com/bwp/pdf-b/9424134b.pdf


I was wondering what Crawford claimed Hubble's constant was. At the end of the paper he has this (page 83, I reformatted the numeric values):

Results for the topics of the Hubble redshift, X-ray background radiation, the cosmic background radiation and dark matter show strong support for curvature cosmology. In particular CC predicts that the Hubble constant is 64.4 +/- 0.2 km s^-1 Mpc^-1 whereas the value estimated from the type 1a supernova data is 63.8 +/- 0.5 km s^-1 Mpc^-1 and the result from the Coma cluster (Section 5.15) is 65.7 km s^-1 Mpc^-1.
The data from Planck (Feb 5, 2015) show the result as: http://arxiv.org/abs/1502.01589

These data are consistent with the six-parameter inflationary LCDM cosmology. From the Planck temperature and lensing data, for this cosmology we find a Hubble constant, H0= (67.8 +/- 0.9) km/s/Mpc, a matter density parameter Omega_m = 0.308 +/- 0.012 and a scalar spectral index with n_s = 0.968 +/- 0.006.
I don't know if the disagreement is critical to Crawford's theory. It looks like the Hubble constant derived from the Planck data assumes the lambda-CDM standard model is true. Given this newer data, I wonder what value Curvature Cosmology would derive for the constant?
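Taking the quoted one-sigma uncertainties at face value, a quick check shows how the values compare: the CC prediction and the Planck figure do not overlap, while the CC and supernova estimates do.

```python
def intervals_overlap(a: tuple, b: tuple) -> bool:
    """a and b are (value, uncertainty) pairs; True if the 1-sigma bars overlap."""
    (va, ua), (vb, ub) = a, b
    return abs(va - vb) <= ua + ub

cc = (64.4, 0.2)        # Crawford's curvature-cosmology prediction, km/s/Mpc
supernova = (63.8, 0.5) # type 1a supernova estimate quoted by Crawford
planck = (67.8, 0.9)    # Planck 2015 lambda-CDM value

print(intervals_overlap(cc, planck))     # → False: roughly a 3 sigma gap
print(intervals_overlap(cc, supernova))  # → True: these two are consistent
```

This is only a crude consistency check, of course; a real comparison would have to account for how each uncertainty was derived and whether the measurements are independent.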

----------


## desiresjab

> I was wondering what Crawford claimed Hubble's constant was. At the end of the paper he has this (page 83, I reformatted the numeric values):
> 
> Results for the topics of the Hubble redshift, X-ray background radiation, the cosmic background radiation and dark matter show strong support for curvature cosmology. In particular CC predicts that the Hubble constant is 64.4 +/- 0.2kms^-1 Mpc^-1 whereas the value estimated from the type 1a supernova data is 63.8 +/-0.5 kms^-1 Mpc^-1 and the result from the Coma cluster (Section 5.15) is 65.7 kms^-1 Mpc^-1.
> The data from Planck (Feb 5, 2015) show the result as: http://arxiv.org/abs/1502.01589
> 
> These data are consistent with the six-parameter inflationary LCDM cosmology. From the Planck temperature and lensing data, for this cosmology we find a Hubble constant, H0= (67.8 +/- 0.9) km/s/Mpc, a matter density parameter Omega_m = 0.308 +/- 0.012 and a scalar spectral index with n_s = 0.968 +/- 0.006.
> I don't know if the disagreement is critical to Crawford's theory. It looks like the Hubble constant derived from the Planck data assumes the lambda-CDM standard model is true. Given this newer data, I wonder what value Curvature Cosmology would derive for the constant?


Good pick up. In the first instance, the disagreement is about 1% either way. 1% is pretty monstrous. Trying to adjust for interstellar dust and gravitational lensing, all the while screening out background "noise" of various types, one wonders why there is not even more discrepancy between the systems. It is like playing soccer where the goal is hidden.

Somehow one doubts that all figures for each system were derived independently. I think that is mentioned in the paper. Ahem, doesn't one use the rival's best measurements as fuel to prime one's own engine?

It is a multi-dimensional, multi-player chess game of unknown infinitude or finitude. A clear winner is not in sight, I believe. All players already have strengths and weaknesses in their formations.

Right about here, Yessy boy, is a big gate where most mathematicians say, _wait a minute, I'm a poet._

Unless one goes in and crunches differential equations, goes in and manipulates in tensor calculus, goes in and calculates in multi-dimensional Lie matrices, all in a coordinated manner and to a purpose, with all constants and variables applied correctly, the force, angle or mass of every one of them understood in the overall context, one does not get much closer, but just crowds the gate, methinks. This requires a great deal of advanced physics, in addition to the advanced math.

I am happy to be a gate crowder. Like many, though, I still plot a way inside. I doubt I will ever get through that gate, but things nearly as strange have already happened in my intellectual life. At this gate are some very interesting discussions. This or that crowder might have enough information to quibble on arbitrary points, to the enlightenment of all. Many can ask interesting questions.

Spinoza's _thought determines action, desire determines thought, instinct determines desire... therefore there is no free will_ seems distant and quaint to us now, but it is worth reciting as one pole of the argument in time.

I get caught in recreations of depth. There are the recreations of depth I have gotten to and the ones I hope to get to.

One recreation is unsolved problems in number theory. It is one field of mathematics where everyone is allowed to play. Anything advanced you know is just gravy, because there is analytical number theory, too (meaning using calculus in addition to algebra).

I write articles of discovery to myself all the time, to keep this a little bit about writing. For instance, I have an original proof of Fermat's little theorem. Someone else probably proved this simple theorem in this way before me, but the point is I didn't know about it and was able to do it myself. I demanded absolute lucidity and got it, a visualization of irreproachable, irrefutable clarity.

I demand this same clarity of quadratic reciprocity in modular arithmetic, but have not yet achieved it. I am close. I have what I would call a good understanding. I know the theorem from diverse angles and can follow standard proofs. I have a multi-layered point of view. The visualization is beginning to stir. Do I have such a visualization within me, or will it remain a shadowy thing that seems to stir on the ground and never stands?

It is my highest immediate goal in math, I frankly admit. Until I can see right through it the way I can see right through Fermat's little theorem in a visualization, I will never be through with it. It is as central to number theory as the Pythagorean theorem is to geometry and trig, but a lot more difficult. Even Euler was not up to hashing out all its difficulties, which is really saying something, since in math this guy is up there with Bach and Monet, if you will. Only Gauss, the foremost of all mathematicians, was able to bring this problem to rest, proving it eight different ways in his lifetime. Gauss made a little mini-career of crushing problems that had crushed the greatest mathematicians before him, sometimes for thousands of years. Gauss is what Ramanujan would have been, had he been lucky enough to be born at the right time and place. Those two were born to it, we know for certain. We know Mozart was born into music, and most likely for it, though one feels a mathematical rearing instead of a musical one in the case of Mozart might have produced a superb mathematician rather than a superb musician and composer. He displayed exactly the same ability to "calculate" in his head, composing multiple pieces in various mediums before bothering to write them down.

Shakespeare is the tough one. Was the consensus greatest poet/dramatist of all time born to it? Even if so, that does not mean he did not have to work ceaselessly at his art.

----------


## YesNo

About Spinoza's determinism it seems that instinct only provides constraints and dispositions rather than determining anything. For example, sexual desire is like a carrot disposing us to say yes to pleasure rather than forcing us to do so. 

Do you have a link to your proof of Fermat's little theorem? I would be interested in reading it.

There are people who can visualize numbers. I remember seeing someone who could recite pi to many decimal places by visualizing what the number should be. I will see if I can find that youtube video again. 

Edit: Daniel Tammet comes to mind as a savant with abilities to calculate and visualize numbers. There is also Jason Padgett: http://www.livescience.com/45349-bra...th-genius.html

----------


## desiresjab

> About Spinoza's determinism it seems that instinct only provides constraints and dispositions rather than determining anything. For example, sexual desire is like a carrot disposing us to say yes to pleasure rather than forcing us to do so. 
> 
> Do you have a link to your proof of Fermat's little theorem? I would be interested in reading it.
> 
> There are people who can visualize numbers. I remember seeing someone who could recite pi to many decimal places by visualizing what the number should be. I will see if I can find that youtube video again. 
> 
> Edit: Daniel Tammet comes to mind as a savant with abilities to calculate and visualize numbers. There is also Jason Padgett: http://www.livescience.com/45349-bra...th-genius.html


Tammet must have irregularities in his corpus callosum, which connects the brain's two hemispheres. Some people have an unusual correspondence between the two hemispheres, synesthetes being notable examples.

Tammet is a savant, not a genius in the traditional sense. No one has figured out a way to make his astounding abilities work for humanity in a large way. If we ourselves were a bit smarter, say two hundred years smarter, I think it a fair assumption that Tammet would have much more to tell us. How do you talk to a dolphin, though? We are not yet smart enough to do that in their own language, either. Kim Peek, on the other hand, was a classic idiot savant. For all his abilities you could not get much out of him because he did not comprehend or apprehend the world in conventional terms whatsoever.

Do you know the brain preserves in an identifiable marking on the frontal lobe the learning of a stringed instrument early in life? Einstein had this marking from early violin lessons. It looks something like a horseshoe, and I myself will have it from learning guitar early. If you learned a stringed instrument early in life, you will have this marking!

Now get ready for something eerie. If you learned piano instead of a stringed instrument, the same marking is there all right, but on the other hemisphere. I do not know what happens if you learned both simultaneously, or what kind of marking there is for wind instruments or other groups. Piano is actually classified as a percussive instrument, I believe. Somehow I doubt a snare drummer has the same marking, but who knows?

Okay, a^(p-1) = 1 (mod p) is the standard form of Fermat's little theorem, and the equals sign stands in for a congruence symbol, which is three parallel lines instead of two. Congruence means two numbers belong to the same congruence class.

Roughly, you can say two numbers are congruent if they give the same remainder when divided by another particular number, usually called p because it is a prime. Instead of _mod_, think of the word _divisor_, for that is exactly what a modulus is.

A more beautiful and insightful form comes from the preceding step in the usual modern proof, and is a^p = a (mod p). Take a picture of that. This is a very intriguing congruence. You can always think of what is in front of the mod symbol as the remainder in a division which has already taken place, or that is going to take place. In our case it was when a^p was divided by p. This left a remainder of a, which is enough to hear heavenly choirs sing as one instinctively asks _why_?

Now _a_ does not have to be prime, but for illustrative purposes, choosing from among the smallest primes has obvious advantages. The theorem says this form is only true when p, the exponent, is a prime, and _a_ is not a multiple or a factor of _p_. This is called being _relatively_ prime. But two distinct primes are always relatively prime to each other. Another reason to choose them.

To envision what a^p = a (divisor p) means, lay down three tiles of length 5. Next to them, lay down tiles of length 3. Proceed until you have laid down six of the 3-length tiles. If you had stopped at five of the latter, the two strips of tiles would be of equal length, but as it now stands we have one tile of length 3 sticking out. Because we are dealing with two primes, we could have done our operations in either order. In other words, in our example either 3 or 5 can serve as the modulus (divisor), as you choose, and it does not matter which tiles we lay down first or think of as the divisor. We could have a 5 "sticking out" if we had gone the other way, is the only difference, and it makes no difference.

3·3·3·3·3/5, that is 3^5/5, is the division we have going on, by the way.

The points along the number line which have 3 and 5 (a and p) as a common factor can be marked mentally. Why a^p always leaves an _a_ sticking out (the remainder) when divided by p is the question illustrated above with tiles, but not yet proven. To prove that a^p always leaves an a sticking out, try:

Factoring 3^5 - 3, as a concrete example. The first step, 3(3^4 - 1), is easy enough. But as you continue to factor, fractions come into play. 3·3(3^3 - 1/3) (mod 5) means the logic of the proof relies on modular inverses and a few other tricky concepts. I leave the final steps as an exercise.

(Hint): The object is to show that the expression 3^5 - 3, more generally known as a^p - a, belongs to the zero class. No more is necessary.

P.S. Going with the visualization for a proof turns a simple proof into a more difficult one, but we had to stay with the illustration because it makes the concept so clear.

P.P.S. I made a mistake with the factorization and corrected it. Formerly, I had 9 as the denominator of the fraction; that would occur on the next factorization. I have also included the mod operator there, to make things even more clear.
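A quick numeric sanity check of the congruence discussed above, using Python's built-in three-argument pow for modular exponentiation (this is just a verification sketch, not the proof itself):

```python
def fermat_check(a: int, p: int) -> bool:
    """Check the congruence a^p = a (mod p) directly."""
    return pow(a, p, p) == a % p   # pow(a, p, p) computes a^p mod p efficiently

# The worked example from the post: a = 3, p = 5; 3^5 = 243 = 48*5 + 3,
# so dividing by 5 leaves an "a" (here 3) sticking out.
print(fermat_check(3, 5))  # → True

# The congruence holds for every a when the modulus 7 is prime...
print(all(fermat_check(a, 7) for a in range(1, 20)))  # → True

# ...but fails for some a when the modulus is composite:
print(fermat_check(2, 9))  # → False, since 2^9 = 512 = 8 (mod 9)
```

The last line answers Golomb's question in miniature: the primality of p is doing real work, since 9 is not prime and the congruence breaks.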

----------


## desiresjab

What does my last post have to do with cosmology? Well, plenty, perhaps.

Earlier in the discussion we sort of determined that as far as man's imagination goes, even God is limited in the kind of universe which that entity could create. Specifically, that entity could not create a universe where two is not the successor of one. If we settle for that, and I have, then just how far does that idea extend into mathematics? Does it mean God would also be incapable of creating a universe where Fermat's little theorem is not true?

Huge question. I don't know how to answer it.

----------


## desiresjab

It is an astounding thought. God could only make universes which obey our mathematics. God cannot make a physical universe which does not obey some mathematics, cannot make a universe where alternative algebraic structures are not possible, could not make a universe where any of the notions of our mathematics are false, other than twiddling with basic axioms as we ourselves have already done.

Well, someone must have a thought on that. A God constrained by mathematics. Actually, that is, constrained by the leap from mathematics to matter. Or is it just mathematics that constrains God? That one is tough. Help me out, somebody.

----------


## YesNo

> Do you know the brain preserves in an identifiable marking on the frontal lobe the learning of a stringed instrument early in life? Einstein had this marking from early violin lessons. It looks something like a horseshoe, and I myself will have it from learning guitar early. If you learned a stringed instrument early in life, you will have this marking!
> 
> Now get ready for something eerie. If you learned piano instead of a stringed instrument, the same marking is there all right, but on the other hemisphere. I do not know what happens if you learned both simultaneously, or what kind of marking there is for wind instruments or other groups. Piano is actually classified as a percussive instrument, I believe. Somehow I doubt a snare drummer has the same marking, but who knows?


I didn't know about these markings, but they support the idea of neuroplasticity, which is part of recent evidence suggesting the mind affects the brain rather than the other way around. 




> Okay, a^(p-1) = 1 (mod p) is the standard form of Fermat's little theorem, and the equals sign stands in for a congruence symbol, which is three parallel lines instead of two. Congruence means two numbers belong to the same congruence class. 
> 
> Roughly, you can say two numbers are congruent if they give the same remainder when divided by another particular number, usually called p because it is a prime. Instead of _mod_, think of the word _divisor_, for that is exactly what a modulus is.
> 
> A more beautiful and insightful form comes from the preceding step in the usual modern proof, and is a^p = a (mod p). Take a picture of that. This is a very intriguing congruence. You can always think of what is in front of the mod symbol as the remainder in a division which has already taken place, or that is going to take place. In our case it was when a^p was divided by p. This left a remainder of a, which is enough to hear heavenly choirs sing as one instinctively asks _why_? 
> 
> Now _a_ does not have to be prime, but for illustrative purposes, choosing from among the smallest primes has obvious advantages. The theorem says this form is only true when p, the exponent, is a prime, and _a_ is not a multiple or a factor of _p_. This is called being _relatively_ prime. But two distinct primes are always relatively prime to each other. Another reason to choose them.
> 
> To envision what a^p = a (divisor p) means, lay down three tiles of length 5. Next to them, lay down tiles of length 3. Proceed until you have laid down six of the 3-length tiles. If you had stopped at five of the latter, the two strips of tiles would be of equal length, but as it now stands we have one tile of length 3 sticking out. Because we are dealing with two primes, we could have done our operations in either order. In other words, in our example either 3 or 5 can serve as the modulus (divisor), as you choose, and it does not matter which tiles we lay down first or think of as the divisor. We could have a 5 "sticking out" if we had gone the other way, is the only difference, and it makes no difference.
> ...


Here is a link to a variety of proofs: https://en.wikipedia.org/wiki/Proofs...little_theorem

I am familiar with the ones for modular arithmetic and the proof using the binomial theorem. I was unaware of Golomb's combinatorial proof: http://www.cimat.mx/~mmoreno/teachin...Little_Thm.pdf. One thing Golomb asks, which is important for checking that these proofs are correct, is where they use the hypothesis that p is prime, since the result is not in general true for all integers. 

I am still trying to understand your proof about tiling as well as the dynamical system proof mentioned in the link of proofs above.

----------


## YesNo

> It is an astounding thought. God could only make universes which obey our mathematics. God cannot make a physical universe which does not obey some mathematics, cannot make a universe where alternative algebraic structures are not possible, could not make a universe where any of the notions of our mathematics are false, other than twiddling with basic axioms as we ourselves have already done.
> 
> Well, someone must have a thought on that. A God constrained by mathematics. Actually, that is, constrained by the leap from mathematics to matter. Or is it just mathematics that constrains God? That one is tough. Help me out, somebody.


I subscribe to Robert Prechter's Elliott Wave reports. This seems to me to be a similar view of how markets behave. They are not the result of rational activity on the part of market participants but rather "social mood" which is a sort of unconscious herding even when people are apparently making individual decisions to take on risk by buying equities and bonds. What causes social mood? It would be based on Fibonacci (mathematical) constraints on impulsive and corrective waves and not fundamental events.

I find this a little too deterministic at times, and there must be multiple herds in place, since for each buyer following some herd there is a seller following another, and apparently Prechter thinks he can think outside this herding box. But it seems to work and I keep wondering what herd I am in. What I like about it is the idea that we are constrained by systems (or consciousness) above our own rather than by something unconscious below us. It is similar to Niles Eldredge's punctuated equilibria where the biological species are considered to be real and above our individual existences providing us with additional constraints such as pair bonding.

Unlike your perspective about God's constraints, these are systems providing dynamic constraints. I suspect God is also constrained as to how hydrogen behaves, but I don't know that it matters if consciousness is fundamental.

----------


## desiresjab

> I subscribe to Robert Prechter's Elliott Wave reports. This seems to me to be a similar view of how markets behave. They are not the result of rational activity on the part of market participants but rather "social mood" which is a sort of unconscious herding even when people are apparently making individual decisions to take on risk by buying equities and bonds. What causes social mood? It would be based on Fibonacci (mathematical) constraints on impulsive and corrective waves and not fundamental events.
> 
> I find this a little too deterministic at times and there must be multiple herds in place since for each buyer following some herd there is a seller following another herd and apparently Prechter thinks he can think outside this herding box. But it seems to work and I keep wondering what herd I am in. What I like about it is the idea that we are constrained by systems (or consciousness) above our own rather than by something unconscious below us. It is similar to Niles Eldredge's punctuated equilibria where the biological species are considered to be real and above our individual existences providing us with additional constraints such as pair bonding.
> 
> Unlike your perspective about God's constraints, these are systems providing dynamic constraints. I suspect God is also constrained as to how hydrogen behaves, but I don't know that it matters if consciousness is fundamental.


Fortunately, these thoughts are interesting in themselves, for I don't see how they connect with what I said about Godly constraints. No matter.

We might be able to apply a certain stimulus to a mosquito or a dolphin and cause a certain behavior in them. We might make them herd, for instance. We have said something; we just don't know exactly what, only that it causes this behavior.

The people can be gathered by a signal to the town square. A variety of meanings could be attached to their coming there, and we do not know which is correct necessarily, just as in the case with animal communications. It takes an even more well designed experiment to know what we ourselves have said in their terms. At face value, we do not know if the people showed up in the square to pray, for a town meeting, to dance, for an emergency announcement or for something else. We gave the signal, but what does it mean to those who responded?

----------


## desiresjab

> I didn't know about these markings, but it supports the idea of neuroplasticity which is part of a recent kind of evidence of how the mind affects the brain rather than the other way around. 
> 
> 
> 
> Here is a link to a variety of proofs: https://en.wikipedia.org/wiki/Proofs...little_theorem
> 
> I am familiar with the ones for modular arithmetic and the proof using the binomial theorem. I was unaware of Golomb's combinatorial proof: http://www.cimat.mx/~mmoreno/teachin...Little_Thm.pdf. One thing Golomb asks which is important for these proofs to make sure they are correct is where do they use the hypothesis that p is prime since the result is not in general true for all integers. 
> 
> I am still trying to understand your proof about tiling as well as the dynamical system proof mentioned in the link of proofs above.


I have looked at all these proofs before, in keeping with my _many perspectives_ philosophy. The combinatorial proof is not bad, but the simplest proof is the first one given, by way of modular arithmetic. All you have to know beforehand is that when you multiply the elements of the set {1, 2, 3, ..., p-1} by a constant _a_ not divisible by p, the original set is merely reproduced (mod p) in a different order. This is also the route to Wilson's theorem.

----------


## YesNo

> I have looked at all these proofs before, in keeping with my _many perspectives_ philosophy. The combinatorial proof is not bad, but the simplest proof is the first proof given by way of modular arithmetic. All you have to know beforehand is that when you multiply the elements of the set {1, 2, 3, 4,....n} by a constant _a_, the original set is merely reproduced in a different order by the multiplication. This is where you get to Wilson's theorem from, as well.


That different order is what leads me to have doubts about the proof although I know the result is correct. It does use the hypothesis that p is prime.




> Fortunately, these thoughts are interesting in themselves, for I don't see how they connect with what I said about Godly constraints. No matter.


What I thought was similar was that both of you use mathematics more than I would. I would like to see something more conscious involved.

----------


## desiresjab

> That different order is what leads me to have doubts about the proof although I know the result is correct. It does use the hypothesis that p is prime.


I lost a long reply because my computer froze up. Just as well. It was probably too pedantic and meandering.

The question is whether the set {0, 1, 2, 3, 4, 5, 6,...n} will actually reproduce itself in its entirety when each member is multiplied by the same constant. Remember that under a modulus, numbers have nowhere to go when multiplied, except to one or another residue class of the residue system. They are trapped; they do nothing but cycle.

For modulus 7, 94827165103984648126484356 is in one or another of the 7 residue classes. That much is guaranteed, because all integers are. That huge number can always be reduced to one of the residue classes and its most basic, i.e. smallest, representative of the class.

{49, 1, 2, 38, 4, 705, 6} is also a complete residue system modulo 7, because each class is represented once, even though some of the representatives are not fully reduced. That makes no difference. All numbers in a residue class are exactly equivalent, and may be substituted for one another at any point in a calculation.

Below is the real key, the short expo.

If _x_ and _y_ are already congruent, then _ax_ and _ay_ will still be congruent, i.e. belong to the same residue class as each other after the multiplication, though it may now be a different class they are in together. This is one of the fundamental properties of congruences.

The property works in reverse, as well. The members of the set were all mutually incongruent to begin with, because they belonged to different residue classes. Multiplied by the same constant _a_, they must all remain incongruent, as each cycles around the clock face according to the multiplier, to its eventual slot.

Since they must all remain mutually incongruent after the multiplication, they are trapped again, the seven different products have no choice but to represent each residue class, lest two of them be congruent, which members of different residue classes cannot be, by definition.

I enjoyed that. I really had to think it through. I am lucky my computer crashed three times, as it turns out.

Everything that needs to be understood with regard to "a different order" is contained in the last three paragraphs above.
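For concreteness, the whole argument can be machine-checked for p = 7; a minimal Python sketch (the variable names are mine):

```python
# Check that multiplying the nonzero residues mod 7 by any constant a
# (with a not divisible by 7) merely permutes them, and that Fermat's
# little theorem a^(p-1) ≡ 1 (mod p) follows.

p = 7
residues = list(range(1, p))              # {1, 2, ..., 6}

for a in range(1, p):
    permuted = sorted(a * x % p for x in residues)
    assert permuted == residues           # same set, different order

    # The products of both sets are therefore congruent:
    # (p-1)! * a^(p-1) ≡ (p-1)! (mod p), and cancelling (p-1)! leaves
    # a^(p-1) ≡ 1 (mod p):
    assert pow(a, p - 1, p) == 1
```

Changing p to another prime gives the same result; changing it to a composite such as 6 breaks the first assertion for some values of a.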

----------


## YesNo

> I lost a long reply because my computer froze up. Just as well. It was probably too pedantic and meandering.
> 
> The question is whether the set {0, 1, 2, 3, 4, 5, 6,...n} will actually reproduce itself in its entirety when each member is multiplied by the same constant. Remember that under a modulus, numbers have nowhere to go when multiplied, except to one or another residue class of the residue system. They are trapped; they do nothing but cycle.
> 
> For modulus 7, 94827165103984648126484356 is in one or another of the 7 residue classes. That much is guaranteed, because all integers are. That huge number can always be reduced to one of the residue classes and its most basic, i.e. smallest, representative of the class.
> 
> *{49, 2, 38, 4, 505, 6}*, is also a complete residue system of 7, because each class is represented once, even though some of the representatives are not fully reduced. That makes no difference. All numbers in a residue class are exactly equivalent, and may be substituted for one another at any point in calculations.


Shouldn't there be 7 elements in the set making the elements congruent to {0,1,2,3,4,5,6}?




> Below is the real key, the short expo.
> 
> If _x_ and _y_ are already congruent, then _ax_ and _ay_ will still be congruent, i.e. belong to the same residue class as each other after the multiplication, though it may now be a different class they are in together. This is one of the fundamental properties of congruences.


That makes sense, because given a prime p and x ≡ y (mod p), then for any integer a, ax ≡ ay (mod p). In this case a could equal 0.




> The property works in reverse, as well. The members of the set were all mutually incongruent to begin with, because they belonged to different residue classes. Multiplied by the same constant _a_, they must all remain incongruent, as each cycles around the clock face according to the multiplier, to its eventual slot.


Going in the other direction, if x is not congruent to y mod a prime p, then multiplying x and y by a = 0 would make them congruent. The proof in the link avoids a = 0 for a^(p-1) ≡ 1 (mod p) by making sure 0 < a < p. However, a^p ≡ a (mod p) works for a = 0, since one can factor out the a.

So my question would be: given any prime p, how do we know there aren't other residues that act like 0 in the set of residues mod p, besides 0?

Now I know there is only one element that acts as a zero as well as only one element that acts as a unit (1), but I wonder if this requires some sort of proof or can it be assumed at this point?

Edit: This does seem to be where we need the hypothesis that p is a prime. If the modulus were 4, then 2*2 ≡ 0 (mod 4). The residue classes mod a prime form finite fields: https://en.wikipedia.org/wiki/Finite_field
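Checking this by brute force for small moduli is easy; a quick sketch (the helper name `zero_divisors` is mine) showing that a prime modulus has no nonzero residues that act like 0, while composite moduli do:

```python
def zero_divisors(n):
    """Nonzero residues x mod n for which some nonzero y has x*y ≡ 0 (mod n)."""
    return sorted({x for x in range(1, n)
                     for y in range(1, n) if x * y % n == 0})

assert zero_divisors(7) == []            # prime modulus: only 0 acts like 0
assert zero_divisors(4) == [2]           # 2*2 ≡ 0 (mod 4)
assert zero_divisors(6) == [2, 3, 4]     # composite moduli are full of them
```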




> Since they must all remain mutually incongruent after the multiplication, they are trapped again, the seven different products have no choice but to represent each residue class, lest two of them be congruent, which members of different residue classes cannot be, by definition.

----------


## desiresjab

You are right. I did miss an element in a set I listed. Sorry about that. I went back and corrected it.

The zero class of residues is generally left out of many procedures in the business, I believe, because it has no inverse and some other reasons. I did not leave the zero class out, though, because 49≡0 (mod 7). I left out one of the other classes by oversight.

I may have made other mistakes.

The fundamental property that if _a_≡_b_ (mod p), then _ax_≡_bx_ (mod p), has an analogue with addition, for it is also fundamental that if a≡b (mod p), then a+x≡b+x (mod p).

It is also true with exponents. If _a_≡_b_ (mod p), then a^n≡b^n (mod p).

Being able to accept with clarity just these three properties is essential. They are powerful tools and lead many places.
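All three properties are easy to spot-check numerically; a throwaway sketch with arbitrarily chosen numbers, nothing more:

```python
# Verify the three fundamental properties for p = 11, using pairs that are
# congruent mod 11 (15 ≡ 4, 26 ≡ 4, 100 ≡ 1).

p = 11
for a, b in [(15, 4), (26, 4), (100, 1)]:
    assert (a - b) % p == 0                  # the pair really is congruent
    for x in range(10):
        assert (a * x) % p == (b * x) % p    # multiplication by x
        assert (a + x) % p == (b + x) % p    # addition of x
    for n in range(1, 10):
        assert pow(a, n, p) == pow(b, n, p)  # raising to the power n
```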

I have satisfied myself with respect to Fermat's little theorem. I feel I have seen to the bottom of the well on that one. It has been reduced to a compact visualization. 

Seeing to the bottom of the well on quadratic reciprocity may not be possible for me. The kind of visualization I seek may not be a realistic possibility. I think such understanding may involve seeing to the bottom of the well on Eisenstein's geometric proof, for he has probably already reduced it to its simplest representation. The innocent eye will not even detect a relation between Eisenstein's lattice points in a rectangular array and the law of QR, that is how far he has gone. To understand his proof, an understanding of other proofs and a familiarity with their notations is essential. I am pretty sure you have to have this intimate familiarity to see to the bottom of the well, even with Eisenstein's deceptively simple proof. When I can do that, or have devised my own representation, only then may I be able to say I have seen to the bottom of the well with respect to QR. That would be a nice feeling to experience, and I wonder if I am going to have it.

Think how powerful the mind of Artin had to be, to finally conquer general reciprocity, when even little ol' quadratic reciprocity is so tangled and tough. How monumental was that task?

----------


## YesNo

> I have satisfied myself with respect to Fermat's little theorem. I feel I have seen to the bottom of the well on that one. It has been reduced to a compact visualization. 
> 
> Seeing to the bottom of the well on quadratic reciprocity may not be possible for me. The kind of visualization I seek may not be a realistic possibility. I think such understanding may involve seeing to the bottom of the well on Eisenstein's geometric proof, for he has probably already reduced it to its simplest representation. The innocent eye will not even detect a relation between Eisenstein's lattice points in a rectangular array and the law of QR, that is how far he has gone. To understand his proof, an understanding of other proofs and a familiarity with their notations is essential. I am pretty sure you have to have this intimate familiarity to see to the bottom of the well, even with Eisenstein's deceptively simple proof. When I can do that, or have devised my own representation, only then may I be able to say I have seen to the bottom of the well with respect to QR. That would be a nice feeling to experience, and I wonder if I am going to have it.


Don't give up hope. However, if you wanted to see to the bottom of the well of Joyce's "Finnegans Wake", I would recommend despair. The bottom may be a lot shallower than quadratic reciprocity.

I checked this on quadratic reciprocity: https://en.wikipedia.org/wiki/Quadratic_reciprocity

It looks like it starts with Fermat's little theorem, a^(p-1) ≡ 1 (mod p), and then asks what one can say about a^((p-1)/2) ≡ ±1 (mod p). I don't understand it. Nor do I understand Artin's generalization, but perhaps we can try to clarify that for each other.

----------


## desiresjab

> Don't give up hope. However, if you wanted to see to the bottom of the well of Joyce's "Finnegans Wake", I would recommend despair. The bottom may be a lot shallower than quadratic reciprocity.
> 
> I checked this on quadratic reciprocity: https://en.wikipedia.org/wiki/Quadratic_reciprocity
> 
> It looks like it starts with Fermat's little theorem, a^(p-1) ≡ 1 (mod p), and then asks what one can say about a^((p-1)/2) ≡ ±1 (mod p). I don't understand it. Nor do I understand Artin's generalization, but perhaps we can try to clarify that for each other.


The theory of primitive roots is another interesting study in number theory. Fermat's little theorem does not say whether a^(p-1) is the first power that equals 1 under the modulus. There could have been earlier powers equal to 1. Primitive roots are the residues whose powers equal 1 for the first time at exponent p-1. So primitive roots are special, and a whole theory is built around them.
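A small sketch makes the distinction concrete for p = 7 (the dictionary name is mine): every nonzero residue satisfies a^6 ≡ 1, but only the primitive roots reach 1 for the first time at exponent 6.

```python
p = 7
orders = {}
for a in range(1, p):
    # The order of a is the first exponent k with a^k ≡ 1 (mod p).
    orders[a] = next(k for k in range(1, p) if pow(a, k, p) == 1)

assert all(pow(a, p - 1, p) == 1 for a in orders)       # Fermat's little theorem
assert all((p - 1) % k == 0 for k in orders.values())   # every order divides p-1
assert [a for a, k in orders.items() if k == p - 1] == [3, 5]   # the primitive roots mod 7
```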

I cannot even see beneath the water in Finnegans Wake. Two or three hundred years from now people will still feel the same about that book. It is more likely that every human being on earth will come to a clear understanding of Relativity than that everyone will come to an understanding of Finnegans Wake. That is how opaque the book is.

The first to fully understand that book will probably be a meatie with integrated implants.

You are right about the multiplication by zero. My own proof relied on a factorization which allowed me to show that some of the factors belonged to the zero class of residues (remainders).

General reciprocity means all of them: the cubic, the quartic, the quintic... What are the laws of general reciprocity, not just quadratic, you see? Artin managed to untangle that. He is one of the great math men of all time that you never hear about. General reciprocity was a problem from Hilbert's original famous list. Solving any one of those problems guarantees one immortality. Many of those problems have now been solved. In one article Artin is referred to as _the_ preeminent algebraist of the 20th century.

There are many angles to view QR from. They are all correct but they all illustrate different aspects of it. It has many different equivalent statements.

Basically, it compares two primes to find out whether either is in the other's quadratic residue set. I know that is a mouthful. Let us compare 3 and 5, for the sake of simplicity. Are there any numbers in the basic residue set of 3 which when squared are equal to 5 (mod 3)? But 5 is equal to 2 (mod 3). Are there any numbers under 3 which equal 2 when squared, then? There are only 1 squared and 2 squared, which both equal 1 (mod 3).

Now we ask the reverse question--can the number 3 be found when the numbers less than 5 are squared (mod 5)? Let us look. 1²=1, 2²=4, 3²=4, 4²=1, all (mod 5).

These numbers (3 and 5) are not quadratic residues of each other, since neither can be found in the other's quadratic residue set. This gives them, when their two characters are multiplied together as in the Legendre symbol, a value of 1, because (-1)(-1)=1.

Another way of stating the general law is that if either of the primes being compared is a 4n+1 type prime, then both primes are either in the other's set, or both are not.

In one species of case, where we have two primes of the 4n+3 variety, one will be in the other's quadratic residue set, and the other will not be. In this case we have a kind of quadratic irreciprocity, as I like to call it, and the value of the Legendre symbol will be -1, since (-1)(1)=-1. Only in the case of two 4n+3 primes will the Legendre symbol ever equal -1.

two 4n+3 types = -1
one of each type = 1
two 4n+1 types = 1

When we compared 3 and 5, we had one of each type. We only need to find one value in this case, because the other is guaranteed to have the same "character" as its companion when the two primes are of different types. One direction of calculation is always easier than the other, and the easy calculation implies the answer for the other prime.

The same reasoning applies when comparing two 4n+3 type primes. Do the easy calculation, and the other value is automatically known to be of opposite "character" to that one.
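The little table above can be verified mechanically using Euler's criterion (a^((p-1)/2) mod p is 1 for residues and p-1, i.e. -1, for non-residues); the helper names below are mine:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def qr_product(p, q):
    """(p/q) * (q/p) for distinct odd primes p and q."""
    return legendre(p, q) * legendre(q, p)

assert qr_product(3, 7) == -1     # two 4n+3 primes
assert qr_product(3, 5) == 1      # one of each type (the example above)
assert qr_product(5, 13) == 1     # two 4n+1 primes
```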

There is a strong concept of periods involved in reciprocity, which I do not have a full grip on yet. Once, I thought I had it rassled down and pinned, but my hold was illegal. Modulus rings are all about periods. They have _torsion_, which means multiplying by a larger number can sometimes make an element smaller. Normal arithmetic is not entirely applicable in modulus rings, obviously. A firmer understanding of which periods affect reciprocity, and how, would clean things up a bit for myself, methinks. Actually, QR has me tired for the moment, but I will cycle back in a few days refreshed. Repeated sieges must win a war of attrition.

The fact that prolonged sieges are necessary means I am dumb, neither a first class nor a second class mathematician, when one considers, my God, that Gauss gave as the criterion for a first class mathematician an immediate understanding of Euler's formula e^(iπ)+1=0, where π is the Greek letter pi and i is the imaginary number, the square root of -1. Also, _e_ is the constant found universally in nature. Mathematically, the exponential function e^x is its own derivative and integral, which makes it really cool, and _e_ is also a transcendental number!

There is more to the complexity of QR, though the 4n+1 and 4n+3 rules stand fast through all.

----------


## YesNo

> The theory of primitive roots is another interesting study in number theory. Fermat's little theorem does not say whether a^(p-1) is the first power that equals 1 under the modulus. There could have been earlier powers equal to 1. Primitive roots are the residues whose powers equal 1 for the first time at exponent p-1. So primitive roots are special, and a whole theory is built around them.


I noticed there is a conjecture Artin made about primitive roots: https://en.wikipedia.org/wiki/Artin%...rimitive_roots

Maybe we can try to prove it for a = 3. I hear it hasn't been shown even for one value.




> I cannot even see beneath the water in Finnegans Wake. Two or three hundred years from now people will still feel the same about that book. It is more likely that every human being on earth will come to a clear understanding of Relativity than that everyone will come to an understanding of Finnegans Wake. That is how opaque the book is.
> 
> The first to fully understand that book will probably be a meatie with integrated implants.


Or someone with a computer with nothing better to do.

Here's a site with a lot of integer sequences: https://oeis.org/A005596




> General reciprocity means all of them: the cubic, the quartic, the quintic... What are the laws of general reciprocity, not just quadratic, you see? Artin managed to untangle that. He is one of the great math men of all time that you never hear about. General reciprocity was a problem from Hilbert's original famous list. Solving any one of those problems guarantees one immortality. Many of those problems have now been solved. In one article Artin is referred to as _the_ preeminent algebraist of the 20th century.
> 
> There are many angles to view QR from. They are all correct but they all illustrate different aspects of it. It has many different equivalent statements.
> 
> Basically, it compares two primes to find out whether either is in the other's quadratic residue set. I know that is a mouthful. Let us compare 3 and 5, for the sake of simplicity. Are there any numbers in the basic residue set of 3 which when squared are equal to 5 (mod 3)? But 5 is equal to 2 (mod 3). Are there any numbers under 3 which equal 2 when squared, then? There are only 1 squared and 2 squared, which both equal 1 (mod 3).
> 
> Now we ask the reverse question--can the number 3 be found when the numbers less than 5 are squared (mod 5)? Let us look. 1²=1, 2²=4, 3²=4, 4²=1, all (mod 5).
> 
> These numbers (3 and 5) are not quadratic residues of each other, since neither can be found in the other's quadratic residue set. This gives them, when their two characters are multiplied together as in the Legendre symbol, a value of 1, because (-1)(-1)=1.
> ...


So why do people care about these reciprocity relationships?

Regarding e^(iπ)+1=0, wouldn't this be just (-1, 0) on the unit circle? With e^(ix) = cos x + i sin x? https://en.wikipedia.org/wiki/Euler%27s_formula
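As a quick numerical sanity check, both identities can be confirmed with Python's `cmath` module (within floating-point error):

```python
import cmath
import math

# e^(i*pi) lands on the point (-1, 0), so e^(i*pi) + 1 vanishes.
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# And e^(ix) = cos x + i sin x for an arbitrary x.
x = 0.7
assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12
```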

----------


## desiresjab

> I noticed there is a conjecture Artin made about primitive roots: https://en.wikipedia.org/wiki/Artin%...rimitive_roots
> 
> Maybe we can try to prove it for a = 3. I hear it hasn't been shown even for one value.
> 
> So why do people care about these reciprocity relationships?
> 
> Regarding e^(iπ)+1=0, wouldn't this be just (-1, 0) on the unit circle? With e^(ix) = cos x + i sin x? https://en.wikipedia.org/wiki/Euler%27s_formula


I have not read about Artin's conjecture yet. I do not think I want to try solving anything even for 3 that the greats have failed to answer. One such problem on a man's calendar is quite enough, and I already have such a problem. It is called Brocard's problem, and I have been working on it for years. Investigating it has caused me to study in detail the classical elements of number theory, so the project, though hopeless, has not been fruitless. 

As for the Euler equation, that is what it represents all right. If you saw that immediately, and then exactly what the trig function means, you will be a first rate mathematician, son. Congratulations.

It is now evident I am a thorough amateur. By thorough I must mean _strictly_. But I am an assiduous one, perhaps too dumb to relent.

Now I need a new project all right. Seeing to the bottom of the well on something as complex as QR requires some projects in between, otherwise you are just stuck in one place and are not learning anything. My method has been to study those areas I think might yield fruit on my Brocard project. Since I can never reasonably hope to solve Brocard's problem, what I can get out of the pursuit is whatever I can pick up that might relate to understanding it better. This leads a man far afield into pleasurable pursuits of learning, and at least partially justifies the obsession with an unsolved problem which some of the ATGs (all-time greats) of mathematics have looked at without success.

Next I will take a look at your Artin link. I am well travelled on the other site. Sequences and series are one of my favorite aspects of math. The historical importance of series cannot be overvalued. Studying series is one of the most fun things a human can do; at least this silly human finds it exhilarating.

Now I must hunt for something which I do not understand and which looks aesthetically appealing and relevant. In between, I write poetry, stories and novels, just like the other folks on the forum do. I seldom try to get anything published because I am too busy sorting everything at once. Dang it! Math is my hobby. A slow individual has to think a long time on these difficult matters to get them even semi-sorted out. The payoff is in ecstasy, though, man. I don't know why that is. I write better than I cipher, but ciphering will just not go away. I like performing certain acts, such as writing novels and going to the well with equations. Promoting them is boring as hell.

You could write something you knew was world class and have no success at all convincing editors and publishers of this. But if one ever did crack an unsolved problem, no editor or publisher could deny the achievement with the flick of his wrist toward the waste basket.

That is one great difference: initially cracking through to the world of literature depends solely upon the opinions of a few important people, whereas cracking the world of math depends solely upon fact, which others may not even dispute. In math you cannot be shut out. Even if you are killed tomorrow like Galois, your achievement lives on as long as your proof was written down. How many great pieces of literature were thrown irretrievably into the dust bin of time, ignored and lost? More than a few, I personally suspect.

----------


## desiresjab

I looked at the Artin conjecture. A couple of things to notice:

_2. Under the conditions that a is not a perfect power and that a₀ is not congruent to 1 modulo 4, this density is independent of a and equals Artin's constant which can be expressed as an infinite product..._

This is the same as saying that _a_ must be a 4n+3 type prime (with the apparent exception of 2, of course), a subject we just discussed. It is surprising how much this idea crops up in high-powered research. This may be an instance of the wide-ranging influence of QR in other areas. You just asked why mathematicians are so concerned with QR. QR is centrally placed in number theory, just as the Pythagorean theorem is in ordinary algebra, geometry and trig--really important! It touches almost everything, but its hand is often concealed.
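As a numerical aside (everything below is my own illustration, not from the article, and the helper names are mine), Artin's constant is the product of 1 - 1/(q(q-1)) over primes q, about 0.37396, and the observed density of primes for which 2 is a primitive root is already close to it below 10,000:

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_primitive_root(a, p):
    # a is a primitive root mod p when its multiplicative order is p - 1.
    x, order = a % p, 1
    while x != 1:
        x, order = x * a % p, order + 1
    return order == p - 1

ps = primes_up_to(10000)

artin = 1.0
for q in ps:                    # truncated version of the infinite product
    artin *= 1 - 1 / (q * (q - 1))

odd = [p for p in ps if p > 2]
density = sum(is_primitive_root(2, p) for p in odd) / len(odd)
# artin ≈ 0.3739..., and density lands in the same neighborhood
```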

The other thing to notice in the article is the importance, again, of the Riemann hypothesis to eventual solutions. The Riemann hypothesis has to be by far the most important unsolved problem in all of mathematics. If it fell, many famous unsolved problems would fall right behind it, for all they need to be complete is that the Riemann hypothesis be true.

Fermat's last theorem was the most famous mathematics problem ever, probably, and solving it justifiably grants Wiles immortality, but a host of other important solutions did not fall right behind it, as they will when the Riemann hypothesis is proven.

----------


## desiresjab

I don't want to hog the airwaves here, but I wanted to try to answer YesNo's question as to the importance of QR. Besides the generalized answer that it is centrally located in number theory, I should state that it relates directly to the solution of quadratic equations in the algebraic structure of the ring of integers under a modulus.

When, why and why not, do quadratic equations have solutions in this algebraic structure, and what are those solutions?

Whether x²≡7 (mod 11), for instance, is soluble depends on QR in this algebra. This child of Gauss is as close as one can get to something like the quadratic formula of ordinary algebra.
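Euler's criterion settles that example in one modular exponentiation; a brief sketch:

```python
p, a = 11, 7

# Euler's criterion: a^((p-1)/2) mod p is 1 when x^2 ≡ a (mod p) is soluble,
# and p - 1 (i.e. -1) when it is not.
euler = pow(a, (p - 1) // 2, p)
solutions = [x for x in range(p) if x * x % p == a]

assert euler == p - 1        # 7 is a non-residue mod 11 ...
assert solutions == []       # ... so the congruence has no solution
```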

Quadratic research is an ongoing thing. Many a PhD dissertation covers some aspect of it.

Quadratic equations are one of those mathematical objects we can clutch and cling to as sure things--we can get solutions. No wonder they are so important in the history of math and retain their importance. Galois told us two centuries ago there were no sure things beyond degree four in equations, no general method to extract solutions to equations of degree five and higher. We got what we got. 

Whether God could have built a universe where general methods for equations of degree five and beyond are as plain and simple as quadratic issues--even that is an open question.

----------


## YesNo

> I have not read about Artin's conjecture yet. I do not think I want to try solving anything even for 3 that the greats have failed to answer. One such problem on a man's calendar is quite enough, and I already have such a problem. It is called Brocard's problem, and I have been working on it for years. Investigating it has caused me to study in detail the classical elements of number theory, so the project, though hopeless, has not been fruitless.


It's not hopeless. Understanding Finnegans Wake is hopeless. 

You might also try hypnosis. 

I haven't heard of Brocard's problem: https://en.wikipedia.org/wiki/Brocard%27s_problem I assume the challenge is to find values of n such that n! + 1 is a perfect square. I see there are only 3 solutions. Is the goal to find a fourth or prove that they have all been found?
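That is the problem; a brute-force sketch recovers the three known solutions (the Brown numbers) in a small range:

```python
import math

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

# Brocard's problem: n! + 1 = m^2. The known solutions are
# n = 4, 5, 7 (giving 25, 121, 5041).
found = [n for n in range(1, 100) if is_square(math.factorial(n) + 1)]
assert found == [4, 5, 7]
```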




> As for the Euler equation, that is what it represents all right. If you saw that immediately, and then exactly what the trig function means, you will be a first rate mathematician, son. Congratulations.
> 
> It is now evident I am a thorough amateur. By thorough I must mean _strictly_. But I am an assiduous one, perhaps too dumb to relent.


All I know is what I remember from first year calculus. What these things mean over the complex numbers is not something I can visualize.




> Now I need a new project all right. Seeing to the bottom of the well on something as complex as QR requires some projects in between, otherwise you are just stuck in one place and are not learning anything. My method has been to study those areas I think might yield fruit on my Brocard project. Since I can never reasonably hope to solve Brocard's problem, what I can get out of the pursuit is whatever I can pick up that might relate to understanding it better. This leads a man far afield into pleasurable pursuits of learning, and at least partially justifies the obsession with an unsolved problem which some of the ATG's of mathematics have looked at without success.
> 
> Next I will take a look at your Artin link. I am well travelled on the other site. Sequences and series are one of my favorite aspects of math. The historical importance of series cannot be overvalued. Studying series is one of the most fun things a human can do; at least this silly human finds it exhilarating.
> 
> Now I must hunt for something which I do not understand and which looks aesthetically appealing and relevant. In between, I write poetry, stories and novels, just like the other folks on the forum do. I seldom try to get anything published because I am too busy sorting everything at once. Dang it! Math is my hobby. A slow individual has to think a long time on these difficult matters to get them even semi-sorted out. The payoff is in ecstasy, though, man. I don't know why that is. I write better than I cipher, but ciphering will just not go away. I like performing certain acts, such as writing novels and going to the well with equations. Promoting them is boring as hell.


We are similar. You should try the poetry contests. You can always put what you post here in your blog or a book of poems later if you want.




> You could write something you knew was world class and have no success at all convincing editors and publishers of this. But if one ever did crack an unsolved problem, no editor or publisher could deny the achievement with the flick of his wrist toward the waste basket.
> 
> That is one great difference: Initially cracking through to the world of literature depends solely upon the opinions of a few important people, whereas cracking the world of math depends solely upon fact which others may not even dispute. In math you cannot be shut out. Even if you are killed tomorrow like Galois, your achievement lives on as long as your proof was written down. How many great pieces of literature were thrown irretrievably into the dust bin of time, ignored and lost? More than a few, I personally suspect.


You still need someone to recognize that your proof was correct. Or someone else will have to re-create it.

----------


## YesNo

> I looked at the Artin conjecture. A couple of things to notice:
> 
> _2. Under the conditions that a is not a perfect power and that a0 is not congruent to 1 modulo 4, this density is independent of a and equals Artin's constant which can be expressed as an infinite product..._
> 
> Is the same thing as saying that _a_ must be a 4n+3 type prime (with the apparent exception of 2, of course), a subject we just discussed. It is surprising how much this idea crops up in high powered research. This may be an instance of the wide ranging influence of QR in other areas. You just asked why mathematicians were so concerned with QR. QR is centrally placed in number theory, just like the Pythagorean theorem is to normal algebra, geometry and trig--really important! It touches almost everything, but its hand is often concealed.
> 
> The other thing to notice in the article is the importance again of the Riemann hypothesis to eventual solutions. The Riemann conjecture has to be by far the most important unsolved problem in all of mathematics. If it fell, many famous unsolved problems would fall right behind it, for all they need to be complete is that the Riemann hypothesis be true.
> 
> Fermat's last theorem was the most famous mathematics problem ever, probably, and solving it justifiably grants Wiles immortality, but a host of other important solutions did not fall right behind it, as they will when the Riemann hypothesis is proven.


It seems like the conjecture has been almost completely proved except for identifying the one or two exceptions that do not work. They would be either 3, 5, or 7. 

It seems that -1 would not work since it could be the primitive root for only Z3x since it flips from -1 to 1 and back again giving only two distinct units. Also squares would not work since the most they could generate are half of the units and their square root would be the primitive root. So I can see why -1 and the squares are excluded. One already knows there can be only finitely many primes, if any, having them as a primitive root.
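This reasoning is easy to check numerically. A quick Python sketch of my own (the `order` helper is mine, not from any of the papers discussed): -1 always has order 2, and a square's order divides (p-1)/2, so neither can generate all the units.

```python
# A numerical check that -1 (i.e. p-1) and perfect squares are never
# primitive roots modulo a prime p > 3: they generate too few units.

def order(a, p):
    """Multiplicative order of a modulo prime p (gcd(a, p) = 1)."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

for p in [5, 7, 11, 13, 17, 19, 23]:
    # -1 flips between -1 and 1: order 2, only two distinct units.
    assert order(p - 1, p) == 2
    # A square b^2 has order dividing (p-1)/2: at most half the units.
    for b in range(2, p - 1):
        assert order((b * b) % p, p) <= (p - 1) // 2
print("-1 and squares are never primitive roots for the primes tested")
```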

So, to proceed we would have to get the proofs by Roger Heath-Brown and R. Gupta and M. Ram Murty.

Then the challenge would be to actually construct the S(a) sets.

Edit: After looking at some of the other papers besides the Wikipedia one, there might be more than a couple of numbers which do not follow the conjecture. I think all that has been shown is that infinitely many numbers do, but which ones do not is not known. It is possible that the set of numbers that do not have infinitely many primes for which they are primitive roots is the set eliminated in Artin's hypothesis, namely -1 or squares.

Of course, I might be totally confused about all of this. I am still putting the pieces of this jigsaw puzzle on the table.

----------


## desiresjab

> It seems like the conjecture has been almost completely proved except for identifying the one or two exceptions that do not work. They would be either 3, 5, or 7. 
> 
> It seems that -1 would not work since it could be the primitive root for only Z3x since it flips from -1 to 1 and back again giving only two distinct units. Also squares would not work since the most they could generate are half of the units and their square root would be the primitive root. So I can see why -1 and the squares are excluded. One already knows there can be only finitely many primes, if any, having them as a primitive root.
> 
> So, to proceed we would have to get the proofs by Roger Heath-Brown and R. Gupta and M. Ram Murty.
> 
> Then the challenge would be to actually construct the S(a) sets.
> 
> Edit: After looking at some of the other papers besides the Wikipedia one, there might be more than a couple of numbers which do not follow the conjecture. I think all that has been shown is that infinitely many numbers do, but which ones do not is not known. It is possible that the set of numbers that do not have infinitely many primes for which they are primitive roots is the set eliminated in Artin's hypothesis, namely -1 or squares.
> ...


Something to stay cognizant of is that (p-1) and -1 are the same thing: they represent the same class and are therefore identical. I can only delve deeply into problems that attract me strongly. Many problems are quite interesting. But a problem has to have a certain form, a certain look, before I devote myself to it. For one thing, I would not look seriously at general reciprocity until I was thoroughly comfortable with QR. I am better off plugging through number theory textbooks, I believe, than taking on multiple unsolved problems. But all unsolved problems are of general interest to me.

And by the way, Merry Christmas to you and everyone on the forum. Simulations can be merry, can't we?

----------


## YesNo

Merry Christmas! Even if you aren't a simulation!

----------


## desiresjab

Gauss believed in the afterlife because he considered it wasteful for there not to be one. People come up with every kind of rationalization. For that belief you would have to believe in a God to begin with, because there is no injunction against nature for being wasteful. Indifferent nature has no motive. One could say God was merely a personification of indifferent nature by humans in an attempt to give it some human qualities of mercy and justice for those we love, and the power to destroy those we do not love. The destruction of one's enemies has been an important role for God throughout history. It made him a star.

I like Christmas. I even love Christmas. Jesus has a lot more to recommend him than Mohammed. Our modern vision of Christmas was created by Johnny Marks and Montgomery Wards, but still I like it. It is a beautiful fairytale. The core of it could even be true.

From one point of view modern Christmas is an ugly capitalist scheme to get people into stores. A pretty package for greed. On the other hand it is a beautiful tradition filled with merriment and good cheer.

I feel sorry for kids today that they cannot experience Christmas the way kids fifty years ago did. Most of the charm is now gone from the tradition. I see kids, I am with them, I know it is not the same for them now. Parents and grandparents try to keep the tradition alive as they knew it, but it is a losing battle against multi-cultural political correctness instituted by an army of diverse activists whose ridiculous college educations prepared them for nothing else. They majored in baloney like "public policy" and "gender studies," then they were loosed upon society. Since they now know nothing useful, they become activists, they start a board, a foundation, an institute. The world, especially our country, is so filled with useless activists I am confident I could find someone advocating for people with two rectums, if I tried hard enough. The toilets we use show how little we care after all.

* * * * *

The rest of this post was so controversial I decided not to publish it during Christmas season. There is still enough to gnaw on.

----------


## YesNo

It occurred to me today that the idea of our being simulations has some similarities to my own idealist viewpoint. Consciousness is behind both of them.

----------


## desiresjab

> It occurred to me today that the idea of our being simulations has some similarities to my own idealist viewpoint. Consciousness is behind both of them.


If we are simulations, what the heck are we simulations of? Maybe we are simulations of beings with free will. If we are simulations, why were we only given sensitivity to small bands of light and sound? Our makers considered that enough-- but for what? And just where are these makers carting our dead off to? No one makes something they do not use in some way, even if it is only art to view. We could be the makers' art form.

Be assured of one thing, God is not going to answer any questions. We will get the answers for ourselves, or not at all. That is what we are up to and have been up to.

Why all the injunctions against "earthly" knowledge, though? What did God have against us wising up? Those parts of the Bible sound very humanly inspired to me, as in keeping your subjects kowtowing to the emperor, sultan, king.

I have to say that I believe 0% of the Bible and Koran were divinely inspired. Whatever God did was done with the creation of a universe with infinitely unfolding emergent properties. Want hints about God?--study the universe. Mathematicans and physicists--all sciences--try to study its architecture.

----------


## YesNo

I am reminded of a glass being half full or half empty. It is a matter of perspective. When you say that the Bible or the Koran are 0% divinely inspired in contrast I view all texts as being divinely inspired. That would include the Bible and the Koran as well as our posts on this thread.

Simulations with free will are even closer to my view of what we are than deterministic simulations. Underlying the simulation idea is some consciousness creating and maintaining the simulation. 

When you talk about "earthly knowledge", what are you referring to? Is there some Bible or Koran verse you are referring to? I have read only about 10% of these texts. If that. I am not familiar with them.

----------


## desiresjab

> I am reminded of a glass being half full or half empty. It is a matter of perspective. When you say that the Bible or the Koran are 0% divinely inspired in contrast I view all texts as being divinely inspired. That would include the Bible and the Koran as well as our posts on this thread.
> 
> Simulations with free will are even closer to my view of what we are than deterministic simulations. Underlying the simulation idea is some consciousness creating and maintaining the simulation. 
> 
> When you talk about "earthly knowledge", what are you referring to? Is there some Bible or Koran verse you are referring to? I have read only about 10% of these texts. If that. I am not familiar with them.


By your definition _Mein Kampf_ was divinely inspired.

1 Corinthians 3:19

_For the wisdom of this world is folly with God. For it is written, "He catches the wise in their craftiness."
_
1 Corinthians 1:19-20

_For it is written, "I will destroy the wisdom of the wise, and the discernment of the discerning I will thwart." Where is the one who is wise? Where is the scribe? Where is the debater of this age? Has not God made foolish the wisdom of the world?
_
James 3:15

_This is not the wisdom that comes down from above, but is earthly, unspiritual, demonic.
_
Colossians 3:2

_Set minds on things that are above, not on things that are on earth.
_
Isaiah 44:25

_Who frustrates the signs of liars and makes fools of diviners, who turns wise men back and makes their knowledge foolish?
_

Over and over in the Bible, thousands of times, the message is the same, one of pure control: Seek no wisdom but that found in God. Any earthly knowledge is evil and demonic.

Remember, the mighty Koran is only the size of a first book of poetry published by a typical independent press. There are not as many injunctions, but they are there. You will find them.

----------


## YesNo

> By your definition _Mein Kampf_ was divinely inspired.


As well as the anti-Mein Kampf texts. Someone conscious wrote them and got inspiration from somewhere. Being inspired doesn't mean the texts are infallible.




> 1 Corinthians 3:19
> 
> _For the wisdom of this world is folly with God. For it is written, "He catches the wise in their craftiness."
> _


That sort of makes sense actually. A lot of people who think they are wise are not.




> 1 Corinthians 1:19-20
> 
> _For it is written, "I will destroy the wisdom of the wise, and the discernment of the discerning I will thwart." Where is the one who is wise? Where is the scribe? Where is the debater of this age? Has not God made foolish the wisdom of the world?
> _


That one also makes sense. Do we really think we know what is going on?




> James 3:15
> 
> _This is not the wisdom that comes down from above, but is earthly, unspiritual, demonic.
> _


I assume this is referring to superior wisdom. Some of us are brighter than others. God should be brighter than all the rest. It makes sense to focus on what is best.




> Colossians 3:2
> 
> _Set minds on things that are above, not on things that are on earth.
> _


Superior wisdom again.




> Isaiah 44:25
> 
> _Who frustrates the signs of liars and makes fools of diviners, who turns wise men back and makes their knowledge foolish?
> _


I could come up with a similar justification for that.




> Over and over in the Bible, thousands of times, the message is the same, one of pure control: Seek no wisdom but that found in God. Any earthly knowledge is evil and demonic.
> 
> Remember, the mighty Koran is only the size of a first book of poetry published by a typical independent press. There are not as many injunctions, but they are there. You will find them.


I think the point of these passages is to avoid delusion and focus on what is true.

----------


## desiresjab

The language in all so-called wisdom literature is general enough to take in a wide sweep. I could find _wise_ passages in Nostradamus or just about anyone else.

The point of controlling knowledge and the very definition of what knowledge is, is to control society, and the passages I quoted are examples of religion in action doing just that. People still use such Biblical injunctions to perpetuate all sorts of nonsensical beliefs not fit for modern minds.

You may interpret these texts with a liberal modern hand and think how beautiful they are, just as believers do, picking and choosing what you like and sweeping under the rug what you do not care for, but the intent of the authors had nothing to do with symbolism. _You pay loyalty only to God through the temple_, that was the big message. In the meantime Jews are omnipresent in western academia, so they are not taking it too seriously. They are gathering up earthly knowledge. _A hoard heaped by the roadside..._ [Joyce].

----------


## YesNo

Generally it is hard to find someone who is completely wrong. There is wisdom all over the place. Some is just harder to find.

----------


## desiresjab

> Generally it is hard to find someone who is completely wrong. There is wisdom all over the place. Some is just harder to find.


I am peering at every detail of Eisenstein's proof of QR, and making headway. Every detail must be accounted for and understood. More pieces are falling into place every time I focus.

My bet has changed. I now believe I will see to the bottom of the well on QR from the Eisenstein perspective. After that, I would like to add some other perspectives, such as combinatorial and group theory approaches. But Eisenstein is not fully transparent yet.

----------


## desiresjab

Tiny snags.

----------


## YesNo

You should be able to see to the bottom of the well.

I have been using Google Sheets to gather data about primitive roots as well as checking some papers online to get a feel for the problem. Since I need to show that 2 is a primitive root for infinitely many primes, I will have to find some way to use the information about a finite number of primes (assume all of them) for which 2 is a primitive root and reason from there that there must be another one.

----------


## desiresjab

> You should be able to see to the bottom of the well.
> 
> I have been using Google Sheets to gather data about primitive roots as well as checking some papers online to get a feel for the problem. Since I need to show that 2 is a primitive root for infinitely many primes, I will have to find some way to use the information about a finite number of primes (assume all of them) for which 2 is a primitive root and reason from there that there must be another one.


I see what you want--something that proceeds along similar lines to Euclid's proof of the infinitude of primes.
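Euclid's argument is easy to replay in miniature. A toy Python sketch (helper names are mine): take any finite list of primes, form N = product + 1, and N's prime factors cannot be on the list, since N leaves remainder 1 when divided by each of them.

```python
# Euclid's infinitude-of-primes argument in miniature:
# N = (2*3*5*7*11*13) + 1 = 30031 is divisible by no prime on the list,
# so any prime factor of N is a new prime.

def smallest_prime_factor(n):
    """Trial division up to sqrt(n); returns n itself if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

known = [2, 3, 5, 7, 11, 13]
N = 1
for p in known:
    N *= p
N += 1                          # N = 30031
q = smallest_prime_factor(N)
assert q not in known           # the factor is a prime not on the list
print(N, "=", q, "*", N // q)
```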

Is this a proven proposition--that 2 is a primitive root for infinitely many primes--or someone's unproven conjecture?

Was this something to do with Artin's conjecture? How quickly I forget except what is in my tunnel vision.

----------


## desiresjab

Okay, now I have refreshed. That type of simple appearing yet intractable problem is typical of number theory. One sometimes is astonished that certain propositions which are so simple go unproven for so long. No doubt, many a doctoral dissertation has beat its head against your particular problem, which is indeed Artin's conjecture.

----------


## Dreamwoven

Good luck to both of you in your search.

----------


## YesNo

Thanks, Dreamwoven!

Yes, it is Artin's conjecture, or part of it. A lot of number theory books are available as pdfs from the internet. There are more than I have time to read. Luckily for me I don't have to read all of them since they repeat themselves, just understand a few.

----------


## desiresjab

Eisenstein has done something extraordinary. His proof actually has nothing to do with QR, other than a method for building the right exponent to go on -1, though it does involve the primes p and q. It looks at the ratios of p and q when the prime in the numerator is multiplied by successive even numbers under the modulus of the other, with a chop function appended--in other words, a function that always rounds down instead of moving to the nearest value. It would have been sufficient to find the correct parity under any circumstance, but Eisenstein is more exact than that, producing the exact exponent on -1 as the number of lattice points in the prescribed regions of his p by q rectangle. Only the (p-1) by (q-1) part interests him, containing the interior lattice points of the larger rectangle, and then only those with even coordinates.

This proof is wonderfully clever. The downside is that it will not reveal any deeper properties of numbers that help elucidate why QR works. Deeper investigations might uncover why it works. It proves what it proves--that Eisenstein's method will always find the right exponent for -1 in the Legendre symbol.
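For anyone who wants to poke at the counting step, here is a sketch of my reading of it, assuming the standard statement of Eisenstein's lemma: (q|p) = (-1) raised to the sum of floor(qu/p) over even u = 2, 4, ..., p-1. The function names are mine; the check is against Euler's criterion.

```python
# Eisenstein's lattice count versus Euler's criterion, for odd primes.

def legendre_euler(q, p):
    """Legendre symbol (q|p) via Euler's criterion: q^((p-1)/2) mod p."""
    return 1 if pow(q, (p - 1) // 2, p) == 1 else -1

def legendre_eisenstein(q, p):
    """(q|p) = (-1)^(sum of floor(q*u/p) over even u = 2, 4, ..., p-1)."""
    exponent = sum((q * u) // p for u in range(2, p, 2))
    return (-1) ** exponent

for p in [3, 5, 7, 11, 13, 17, 19]:
    for q in [3, 5, 7, 11, 13, 17, 19]:
        if p != q:
            assert legendre_eisenstein(q, p) == legendre_euler(q, p)
print("Eisenstein's lattice count matches Euler's criterion on all pairs")
```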

----------


## YesNo

Is the computation done in polynomial time in terms of the number of digits of p and q or something slower?

----------


## desiresjab

> Is the computation done in polynomial time in terms of the number of digits of p and q or something slower?


Uhhhh....I am not sure I understand the question.

Finding a quadratic residue of a given number, no matter how large, should be a P-time exercise. You may be referring to something like RSA encryption or Diffie-Hellman key exchange. Various encryption systems are based on particular aspects of number theory laws. It can be quadratic residues, primitive roots or mod inverses they use to "conceal" the message.

In reverse the problem gets nasty real fast, as in NP nasty. It is easy to give some quadratic residues of a number q. But given a quadratic residue, there is no known way to find q in polynomial time, if the numbers involved are long enough.

Currently, it takes numbers of about four thousand digits length to get your bank account information encrypted securely. If some country or individual had the power to quantum compute, it has already been mathematically proven they could break our present codes in seconds.
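To make the "P-time exercise" concrete: Euler's criterion decides residuosity with one fast modular exponentiation, which Python's three-argument `pow` does by square-and-multiply, polynomial in the number of digits. A minimal sketch (the function name is mine):

```python
# Deciding whether a is a quadratic residue mod an odd prime p is cheap:
# Euler's criterion says a is a residue exactly when a^((p-1)/2) = 1 mod p.

def is_quadratic_residue(a, p):
    """True if a is a quadratic residue modulo the odd prime p."""
    return pow(a, (p - 1) // 2, p) == 1

# The residues mod 11 come out as {1, 3, 4, 5, 9}.
print(sorted(a for a in range(1, 11) if is_quadratic_residue(a, 11)))

p = 2**127 - 1                      # a 39-digit Mersenne prime
print(is_quadratic_residue(2, p))   # instant even at this size
```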

----------


## YesNo

That answered the question. I was wondering if there were a computation problem still unsolved. I suppose a quantum computer runs faster because of a potential parallel processing involved. Wouldn't a network of computers working in parallel be able to simulate such a computer?

----------


## desiresjab

> That answered the question. I was wondering if there were a computation problem still unsolved. I suppose a quantum computer runs faster because of a potential parallel processing involved. Wouldn't a network of computers working in parallel be able to simulate such a computer?


That is the same question I have had. I have a hunch the answer would be "yes," provided that we link enough silicon computers together to fill the solar system, or some other great volume of space.

I went to town today and forgot graphing paper. If I graph out some p's and q's Eisenstein style, another key to QR may pop out visibly or algebraically.

I have some other studies I am stalling right now because I cannot let go of QR when I am so close. Like one of those movie bounty hunters who has pursued a particular fugitive for a long time, I cannot go for coffee now that the fugitive has been sighted. However, at my age I find I need two days rest after pursuing the most intense thinking for one day. Total forced mental focus was something I learned in math and then adapted for dummies from reading about Newton and Tesla. Now, in math I have to force it, because it hurts me to concentrate that hard on x, y and z in a foreign medium, whereas in a field like fiction writing I can stay immersed indefinitely without forcing myself to, indeed without ever becoming conscious of the need to force myself to do anything.

The creative process is so much more enjoyable while it is happening, but when titanic struggles with x, y and z are over and have resulted in a surrender without terms, I find my picture of the universe is altered, and my picture of myself as well. In this medium of math I am no natural, but I am a zealous convert.

----------


## Dreamwoven

I am lost in this discussion, never was any good with maths. This discussion of time-warps and the past and future of Space-Time may be of interest: http://www.space.com/31495-space-tim...ekly_2016-1-04

----------


## desiresjab

I finally reeled Eisenstein in all the way. Of course that does not mean I understand Eisenstein as well as Eisenstein did. I feel like champagne anyway.

----------


## YesNo

Interesting survey article on spacetime, Dreamwoven. In particular this quote:

"It might be that space-time at very short distances takes yet another form and perhaps is not continuous," Amendola said.

Congratulations, desiresjab! I was reading this about quadratic reciprocity. It is a very elementary summary of it: http://sites.millersville.edu/bikena...-residues.html

The part I was interested in was if a prime is a primitive root for another prime it would have to be a quadratic nonresidue. I haven't tried to understand Eisenstein's proof.

----------


## desiresjab

> Interesting survey article on spacetime, Dreamwoven. In particular this quote:
> 
> "It might be that space-time at very short distances takes yet another form and perhaps is not continuous," Amendola said.
> 
> Congratulations, desiresjab! I was reading this about quadratic reciprocity. It is a very elementary summary of it: http://sites.millersville.edu/bikena...-residues.html
> 
> The part I was interested in was if a prime is a primitive root for another prime it would have to be a quadratic nonresidue. I haven't tried to understand Eisenstein's proof.


Thank you. As usual, one wonders why he didn't see it sooner. I guess it is the leap from the quadratic to the linear. -1 is a dummy base, only good for determining if the exponentiated value is positive or negative. His whole lattice graph is linear, yet it explains a quadratic law. People like Eisenstein are not really human, they just share some genes with the rest of us. 

I am going to toy around with making a new encryption system. If nothing else, I will learn the present systems better. I have to come up with an appropriate function.

* * * * *

There is no guarantee that quantum theory and Einsteinian physics are unifiable. Parts of Einstein are already dated, Ed Mitchell has said. What if _never the twain shall meet_?

----------


## YesNo

If those quantum computers ever happen beyond a few qubits we may need a new encryption system.

----------


## desiresjab

> If those quantum computers ever happen beyond a few qubits we may need a new encryption system.


Yes, and it is hard to even imagine what it might be. I have been kicking around some ideas for a system that would be easily patentable. The normal trick is to multiply two huge primes p and q together to produce n, which serves as the modulus. Pick a number e relatively prime to φ(n) as an exponent to encrypt the message, as in M^e. Then you find the inverse of e (mod φ(n)). It is this e^-1 which will be used to untangle the message on the other end. You cannot find φ(n) without knowing the factors of n. That is the immense difficulty they impose on hackers--they have to find φ(n) to get anywhere.
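A toy run of that recipe, with the textbook-sized primes 61 and 53 so the numbers stay readable (real systems use primes hundreds of digits long; all values here are illustrative only):

```python
# Toy RSA: encrypt with the public pair (n, e), decrypt with the private
# exponent d = e^-1 mod phi(n), which requires knowing the factors of n.

from math import gcd

p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120; secret, needs the factors of n
e = 17                         # public exponent, coprime to phi(n)
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # the inverse e^-1 mod phi(n)

M = 65                         # the message
C = pow(M, e, n)               # encrypt: C = M^e mod n
assert pow(C, d, n) == M       # decrypt: C^d mod n recovers M
print("ciphertext", C, "decrypts back to", pow(C, d, n))
```

Anyone who can factor n = 3233 back into 61 and 53 gets phi and then d at once; with huge primes that factoring step is the whole wall.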

I am looking for a system that does not rely on factoring. Encryption is one area of math with a big, big future. I have a few wild ideas.

----------


## YesNo

Maybe using larger primes will keep the current methods going until something better comes along. 

I'm still putting the pieces together on the Artin conjecture. A simpler question would be "Given a number m, are there infinitely many primes p for which m is a quadratic nonresidue?" This would be a larger set, since a quadratic nonresidue does not have to be a primitive root (8 and 12 mod 19, for example), unless I calculated it wrong.
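The 8-and-12 example checks out numerically; a small Python sketch of my own (the `order` helper is mine): both are nonresidues mod 19, yet each has order 6, far short of the 18 a primitive root needs.

```python
# Verify: 8 and 12 are quadratic nonresidues mod 19 but not primitive roots.

def order(a, p):
    """Multiplicative order of a modulo prime p (gcd(a, p) = 1)."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

p = 19
residues = {(b * b) % p for b in range(1, p)}   # the quadratic residues
for a in (8, 12):
    assert a not in residues        # a nonresidue...
    assert order(a, p) == 6         # ...but order 6 < 18: not primitive
print("8 and 12 mod 19: nonresidues of order 6, not primitive roots")
```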

----------


## desiresjab

> Maybe using larger primes will keep the current methods going until something better comes along. 
> 
> I'm still putting the pieces together on the Artin conjecture. A simpler question would be "Given a number m, are there infinitely many primes p for which m is a quadratic nonresidue?" This would be a larger set, since a quadratic nonresidue does not have to be a primitive root (8 and 12 mod 19, for example), unless I calculated it wrong.


One aleph is as big as the next aleph.

----------


## YesNo

I thought Aleph one was strictly bigger than Aleph null.

----------


## desiresjab

> I thought Aleph one was strictly bigger than Aleph null.


Also known as Aleph nought, the "smallest" infinity. They are the same entity, all of them, as far as I know.

To make all possible sets from a set of size n, take 2^n. The next set after aleph is 2 raised to the aleph power. You can repeat the process over and over getting bigger sets. No one knows which of these powers has the power of the continuum.

There is an infinite hierarchy of infinities, each theoretically greater than the last, but we only have examples of two kinds. One can be thought of as the counting numbers, and the other is the uncountable points on a line, i.e. the irrational numbers, more specifically the transcendental numbers.

The former do indeed have the cardinality of aleph, and the transcendental numbers may have the cardinality of the continuum. I believe the latter is not known for sure. Kronecker would certainly disapprove.
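The 2^n subset count is easy to confirm for small n; a quick Python sketch (for finite sets only, of course; the jump to infinite sets is the whole point of the discussion above):

```python
# A set with n elements has exactly 2^n subsets: each element is
# independently in or out. Count them explicitly for n = 5.

from itertools import combinations

n = 5
count = sum(1 for k in range(n + 1) for _ in combinations(range(n), k))
assert count == 2 ** n
print(n, "elements ->", count, "subsets")
```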

----------


## Dreamwoven

Impressive knowledge!

----------


## YesNo

The set of functions over the reals should make a larger infinity than the reals. Which is larger than the integers or rationals. But Kronecker may be right that all of that probably doesn't matter.

----------


## Dreamwoven

You guys have learned a lot of stuff that I never did in my London school in 1956. GCE Ordinary Level. Never did math beyond that.

----------


## YesNo

I wish I knew some physics and chemistry. What I know of quantum physics can be traced to someone posting something about many worlds on Lit Net and then going off to the library or the internet to try to make sense out of it. I think I know enough about many worlds at the moment to be able to reject it. The same with black holes, but I am less sure about black holes than I am about many worlds. I didn't even know about the big bang (except as some vague idea) until someone posted that the universe started from "nothing" including space and time. I got to the library as soon as I heard that. One of the most shocking moments of enlightenment was when I heard about a youtube video claiming that we never put a man on the moon. It took two days to get over that and now I'm convinced.

So now I figure if they can put man on the moon, I can solve the Artin conjecture or desiresjab can come up with a new cryptography method.

----------


## desiresjab

> The set of functions over the reals should make a larger infinity than the reals. Which is larger than the integers or rationals. But Kronecker may be right that all of that probably doesn't matter.


The set of functions over the reals will not be of greater cardinality than the reals themselves. Even the transcendentals are part of the reals, and are of course as large as the whole set, in the same way that the set of even numbers is as large as all of the rationals. Cantor proved this. All that matters is whether you can map elements from one set to the other with a one-to-one correspondence. You could produce this correspondence between sets seemingly so sparse as the square numbers and one as dense as the rationals. They are both aleph nought in cardinality.
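The correspondences mentioned above can be made concrete. A sketch (the diagonal enumeration is the standard Cantor-style walk; the helper names are mine): n maps one-to-one onto n^2 to pair the naturals with the squares, and a diagonal walk lists every positive rational exactly once.

```python
# Explicit one-to-one correspondences: naturals <-> squares, and a
# Cantor-style diagonal enumeration of the positive rationals.

from fractions import Fraction
from itertools import count, islice

def rationals():
    """Yield every positive rational exactly once (diagonal by a+b)."""
    seen = set()
    for s in count(2):              # s = numerator + denominator
        for a in range(1, s):
            q = Fraction(a, s - a)
            if q not in seen:       # skip duplicates like 2/2 = 1/1
                seen.add(q)
                yield q

squares = [n * n for n in range(1, 6)]   # 1<->1, 2<->4, 3<->9, ...
first = list(islice(rationals(), 5))
print(squares)   # [1, 4, 9, 16, 25]
print(first)     # 1, 1/2, 2, 1/3, 3
```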

----------


## desiresjab

> I wish I knew some physics and chemistry. What I know of quantum physics can be traced to someone posting something about many worlds on Lit Net and then going off to the library or the internet to try to make sense out of it. I think I know enough about many worlds at the moment to be able to reject it. The same with black holes, but I am less sure about black holes than I am about many worlds. I didn't even know about the big bang (except as some vague idea) until someone posted that the universe started from "nothing" including space and time. I got to the library as soon as I heard that. One of the most shocking moments of enlightenment was when I heard about a youtube video claiming that we never put a man on the moon. It took two days to get over that and now I'm convinced.
> 
> So now I figure if they can put man on the moon, I can solve the Artin conjecture or desiresjab can come up with a new cryptography method.


Good sir, none of us knows enough. The day belongs to those who seize it. Here I am at sunset trying to seize the day. Far be it from me to discourage any research. I wish you luck. Artin's conjecture is really formidable. It is one of those questions that has attracted the best quality of research. You will need to become something of an ace at modular arithmetic, since the conjecture deals with that branch, not normal algebra. The more tools you have the better you can understand what has already been done. You cannot scale Everest without some climbing gear. First, be very sure of what reciprocity means.

The guy in the link below helped me a lot with his articles. After his article on QR, the next article on biquadratic reciprocity is missing, but the ones after that are already written and posted. He ties Pythagorean triplets to reciprocity. This article gives a good idea of just how deep mere QR is, let alone general reciprocity. Hope I have not posted it before, but it is worth a read. I spent a long time on it and have probably read it ten or fifteen times. Anyone who has an easy time with this probably should have been a mathematician.

http://science.larouchepac.com/gauss...ciprocity.html

----------


## YesNo

Thanks for the link. If you have any more please post them. I realize there is a lot to get familiar with before I would even know that I solved anything at all. 

The first step is to show for the number 3 that there are infinitely many primes for which 3 is a quadratic nonresidue. I am sure someone has done that already. Then I would need to know what the additional conditions are to guarantee that 3 was a primitive root as well.

----------


## desiresjab

> Thanks for the link. If you have any more please post them. I realize there is a lot to get familiar with before I would even know that I solved anything at all. 
> 
> The first step is to show for the number 3 that there are infinitely many primes for which 3 is a quadratic nonresidue. I am sure someone has done that already. Then I would need to know what the additional conditions are to guarantee that 3 was a primitive root as well.


Yes, that could be one approach. 3 is a quadratic nonresidue of all its 4n+3 residues, and a nonresidue of all its 4n+1 nonresidues. Primes have exactly as many residues as they have nonresidues under their modulus. That seems like a decent place to start nosing around for truffles. Can you smell that truffle?
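
That even split between residues and nonresidues is easy to see numerically. A quick Python sketch of my own (nothing from the article; the function name is mine): squaring 1 through p-1 mod an odd prime p yields exactly (p-1)/2 distinct residues.

```python
# Quick check: for an odd prime p, exactly half of 1..p-1 are
# quadratic residues mod p, and half are nonresidues.
def quadratic_residues(p):
    """Distinct nonzero squares mod p."""
    return sorted({pow(x, 2, p) for x in range(1, p)})

for p in (5, 7, 11, 13, 19, 23):
    qr = quadratic_residues(p)
    nonres = [x for x in range(1, p) if x not in set(qr)]
    assert len(qr) == len(nonres) == (p - 1) // 2
```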

----------


## YesNo

Let's see if I got this right. If I want to know if 3 is a nonresidue with respect to p, then I need to calculate (3|p) = 3^((p-1)/2) mod p. If the value is -1 then it is a nonresidue. If it is 1 then it is a residue.

I might be able to get this information using quadratic reciprocity for p and q being odd primes. The formula is (p|q)(q|p) = (-1)^(((p-1)/2)((q-1)/2)).

Let q = 3, since that is the number I am interested in. Then (q-1)/2 simplifies to (3-1)/2 = 2/2 = 1, so I can write the quadratic reciprocity rule as follows for q = 3.

(p|3)(3|p) = (-1)^((p-1)/2)

Now what? What I am trying to find is (3|p), but with QR I also need to find (p|3).
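
The machinery can at least be checked numerically. A small Python sketch of my own (`legendre` is my name for Euler's criterion), verifying the reciprocity identity specialized to q = 3:

```python
def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r   # map p-1 back to -1

# reciprocity specialized to q = 3: (p|3)(3|p) = (-1)^((p-1)/2)
for p in (5, 7, 11, 13, 17, 19, 23):
    assert legendre(p, 3) * legendre(3, p) == (-1) ** ((p - 1) // 2)
```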

----------


## YesNo

> The set of functions over the reals will not be of greater cardinality than the reals themselves. Even the transcendentals are part of the reals, and are of course as large as the whole set, in the same way that the set of even numbers is as large as all of the rationals. Cantor proved this. All that matters is whether you can map elements from one set to the other with a one-to-one correspondence. You could produce this correspondence between sets seemingly so sparse as the square numbers and one as dense as the rationals. They are both aleph nought in cardinality.


I'll have to think about this one and maybe do some searching. 

I suspect one could assume there is a 1-1 mapping between the reals and the functions over the reals and then construct a function that is not in that set, using substitutions similar to those Cantor used to show that the reals are larger than the integers; however, I wonder about the details.

----------


## desiresjab

Well, I think we do not know of a set larger than the reals, since the reals have the power of the continuum.

I will try to get to some specific answers to your other questions soon. I just got back online after service being out for a few days. You seem to be on the right track and understanding the subject. There is no reason I shouldn't be surprised at anyone who can do that.

2 and 3 have longer periods than, say, 7. Precisely what this means I am still figuring out.

The amazing thing about the Martinson articles is how they relate 4n+1 numbers to the hypotenuse in right triangles. They then go on to speculate that 4n+1 primes are not really primes at all but an unnamed species of number. I had never heard or seen anything like that before.

----------


## swathisharan

Cosmology is the best subject, as everyone is fascinated by its beauty.

----------


## YesNo

> The amazing thing about the Martinson articles is how they relate 4n+1 numbers to the hypotenuse in right triangles. They then go on to speculate that 4n+1 primes are not really primes at all but an unnamed species of number. I had never heard or seen anything like that before.


I liked that about the article as well. I have been wondering why QR is so interesting. If it goes back to Pythagoras that would explain it. I assumed the 4n + 1 numbers were just a special subset of primes when I read that although the article suggests for some unknown reason that they are not prime numbers. 

There was also a tension between Gauss and Euler that I was unaware of. Gauss seemed to prefer Fermat. And there was something deliberately hidden that associated Gauss with Kepler.

Also I tried reading LaRouche's article and it didn't make sense. I have no problem with the economy being in the toilet, but I didn't understand why he thought it was.

----------


## desiresjab

> I liked that about the article as well. I have been wondering why QR is so interesting. If it goes back to Pythagoras that would explain it. I assumed the 4n + 1 numbers were just a special subset of primes when I read that although the article suggests for some unknown reason that they are not prime numbers. 
> 
> There was also a tension between Gauss and Euler that I was unaware of. Gauss seemed to prefer Fermat. And there was something deliberately hidden that associated Gauss with Kepler.
> 
> Also I tried reading LaRouche's article and it didn't make sense. I have no problem with the economy being in the toilet, but I didn't understand why he thought it was.


Gauss was a teenager when he worked out QR. He did not know about the work of Euler, Lagrange and a few others in that area, according to him, and achieved his results independently. When he was almost done with the Disquisitiones, he found their work and set about cataloguing it along with all of number theory as it was known in Europe at that time. Then he launched his ship, the Disquisitiones, one of the supreme texts of mankind, and one of the least heard of. One out of a million people reads it. Even fewer understand what they have read. Gauss is the guy to give you the law, but not the guy to help you understand it.

Euler was the guy to help you understand things, who would show you his failed attempts as well as his successes. Euler was a natural teacher. One knows full well that Gauss was accessible only to world-class geniuses. Neither of these men stood before thronged classrooms of students, but the expository nature of Euler's writing style, and the way an unusually fine personality was bodied forth in it, is a beautiful thing to see in history.

Martinson's apparent odium for Euler is baffling to me. The way he discounts the entire latter half of Euler's career is shocking, but it certainly does create interest in the article. I would not accuse this gentleman of shock-jocking, though...ahem!

For a fact, Euler tried to untangle QR and failed only by a hair. Legendre tried, too, and came close with a slightly flawed proof. Legendre was an ATG, but he lived in Gauss's shadow like Gehrig in Ruth's. _The method of least squares_ was snatched away from him by history and Gauss.

The difference between Euler and Legendre is that Euler would make an ATG top-ten list in mathematics. On the Mt. Rushmore of mathematics, after Archimedes, Newton and Gauss, Euler is a powerful contender for the fourth spot.

Any tension between Euler and Gauss would have been based on the work alone, and strictly one-way, for Euler was dead by the time Gauss arrived on the world scene. Gauss was six when Euler died.

Like Newton, Gauss was a curmudgeonly neurotic, parsimonious with praise. He used a Latin word that praised Euler, but reserved for Newton the appellation of _summa_.

----------


## YesNo

Thanks for setting me straight on Gauss. I was beginning to think there was something wrong with the "turncoat" Euler as Martinson described him, but I realized I had no reason to trust Martinson's view either.

I downloaded Gauss' Disquisitiones in Latin. I should be able to use Google Translate to get around to the parts I might find interesting. I found a copy of Leonard Dickson's "Introduction to the Theory of Numbers". I figure I better know what is in a book like that.

----------


## desiresjab

The elementary number theory text I got the most out of was Harold Davenport's The Higher Arithmetic. I always find it useful to keep a number of texts around in case one fails me. I found its section on primitive roots to be very lucid.

I do not think QR goes all the way back to Pythagoras, or anywhere near. I am almost certain the theory was unknown to the Greeks. I do not know of any mention of it in the ancient world, even by Diophantus. Unlike the compass and straight edge problem Gauss solved from the ancients, QR seems to have been discovered by Euler in the west and proven by Gauss.

----------


## desiresjab

Wayward Fact #1:

-1 is never a quadratic residue of primes of the form 4n+3, and always a quadratic residue of primes of the form 4n+1.

Wayward fact #2:

The behavior of 2 with respect to QR is different from the odd primes, as we might expect, but still fits into the theory consistently.
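
Both wayward facts are easy to spot-check. A throwaway Python sketch of mine, using Euler's criterion again (the function name is my own):

```python
def is_qr(a, p):
    """True iff a is a quadratic residue mod odd prime p (Euler's criterion)."""
    return pow(a % p, (p - 1) // 2, p) == 1

# Wayward fact #1: -1 is a QR exactly for the 4n+1 primes.
for p in (5, 13, 17, 29, 37):
    assert is_qr(-1, p)
for p in (7, 11, 19, 23, 31):
    assert not is_qr(-1, p)

# Wayward fact #2: 2 follows its own pattern, a QR exactly when p = 8n +/- 1.
for p in (7, 17, 23, 31):
    assert is_qr(2, p)
for p in (5, 11, 13, 19, 29):
    assert not is_qr(2, p)
```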

----------


## YesNo

Yes, I suspect QR is a relatively recent idea. 

Regarding the first wayward fact, -1 is p - 1 mod p. One can tell if -1 is a quadratic residue for an odd prime p by evaluating (-1|p) = (-1)^((p-1)/2). The exponent is even if p is of the form 4m + 1 and odd if p is of the form 4m + 3. It is a primitive root only for 2 and 3, so it is not considered in Artin's conjecture, along with the perfect squares.

Here's a problem in Dickson's text (page 21): Show that the product of all primitive roots of a prime p > 3 is congruent to 1 mod p. 

I can see that this makes sense, but I don't know how to prove it. For example, consider p = 5. The primitive roots are 2 and 3, and 2*3 = 1 mod p. One can write 3 = 2^3 in order to combine it with 2, and then we have 2^1 * 2^3 = 2^4, which should be 1 by Fermat's theorem. So it works for one case, but how would one show that in general? That's the one I'm stuck on at the moment.
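
Before proving it, the claim can at least be checked by brute force. A small Python sketch of mine (`order` computes the multiplicative order directly, nothing clever):

```python
def order(a, p):
    """Multiplicative order of a mod p (assumes gcd(a, p) = 1)."""
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

def primitive_roots(p):
    return [a for a in range(2, p) if order(a, p) == p - 1]

# Dickson's exercise, checked numerically for several primes p > 3
for p in (5, 7, 11, 13, 17, 19):
    prod = 1
    for r in primitive_roots(p):
        prod = prod * r % p
    assert prod == 1
```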

----------


## desiresjab

> Yes, I suspect QR is a relatively recent idea. 
> 
> Regarding the first wayward fact, -1 is p - 1 mod p. One can tell if -1 is a quadratic residue for an odd prime p by evaluating (-1|p) = (-1)^((p-1)/2). The exponent is even if p is of the form 4m + 1 and odd if p is of the form 4m + 3. It is a primitive root only for 2 and 3, so it is not considered in Artin's conjecture, along with the perfect squares.
> 
> Here's a problem in Dickson's text (page 21): Show that the product of all primitive roots of a prime p > 3 is congruent to 1 mod p. 
> 
> I can see that this makes sense, but I don't know how to prove it. For example, consider p = 5. The primitive roots are 2 and 3, and 2*3 = 1 mod p. One can write 3 = 2^3 in order to combine it with 2, and then we have 2^1 * 2^3 = 2^4, which should be 1 by Fermat's theorem. So it works for one case, but how would one show that in general? That's the one I'm stuck on at the moment.


That is an interesting problem, if I may make a suggestion. My intuition is that you need to pair the primitive roots up, since there is usually an even number of them, φ(p-1) of them, actually. The trick is going to be something similar to what I did to prove Fermat's Little Theorem. I believe you can pair the roots with their inverses for multiplication and get 1, since if _a_ is a root, _a_^-1 is also.

Notice that in your example, 2 and 3 are inverses of each other (mod 5), or else their product would not equal 1. Hope that helps a little.
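
To back the suggestion up numerically, here is a small sketch of mine (the function name is my own; `pow(a, -1, p)` for the modular inverse needs Python 3.8+): for each primitive root, its inverse is again a primitive root.

```python
def is_primitive_root(a, p):
    """True iff a generates all of 1..p-1 mod p."""
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k == p - 1

for p in (5, 7, 11, 13):
    for a in range(2, p):
        if is_primitive_root(a, p):
            inv = pow(a, -1, p)        # modular inverse of a mod p
            assert a * inv % p == 1
            assert is_primitive_root(inv, p)
```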

----------


## desiresjab

In the event that φ(p-1) is not even, it should still work out. You simply multiply a times b, and that will be the inverse of the remaining c (mod p).

----------


## desiresjab

Sorry, duplicate post.

----------


## desiresjab

Now if you multiplied the entire residue system of a prime together, you should also get 1, right? For each element in the set has an inverse, which is also in the set, and there are an even number of elements in the complete residue system.
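
As a side check (a quick Python sketch of mine): the full product actually comes out to p - 1, that is -1, not 1, because besides 1 the element p - 1 is its own inverse and pairs with itself, while everything else cancels against its inverse. This is Wilson's theorem, (p-1)! ≡ -1 mod p.

```python
# The product of the full reduced residue system 1..p-1 mod p is p-1
# (i.e. -1, Wilson's theorem): the self-inverse element p-1 survives
# the pairwise cancellation.
for p in (5, 7, 11, 13, 17):
    prod = 1
    for a in range(1, p):
        prod = prod * a % p
    assert prod == p - 1
```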

----------


## desiresjab

Come to think of it, the directly above task should actually be easier when φ(p-1) is odd, since 1 is in every set, and must be paired with itself, leaving an even number of numbers to pair.

----------


## desiresjab

Sorry for spreading this out, but you have made me think. The evenness or oddness of the set probably does not matter. In those cases where it seems at first likely to interfere, I would almost bet that it will magically work out because one of the numbers in the set (besides 1) will be its own inverse. I do not have hardcore evidence or a proof, but my experience is telling me that. Anyway, I think I have said enough, if not too much.

----------


## YesNo

Thanks, desiresjab. It makes sense to pair the primitive roots with their inverses. It didn't occur to me that a primitive root's inverse is also a primitive root, but I think that should be the case.

I found another introductory textbook on the subject by Charles Vanden Eynden which I am also reading. When I get stuck with one, I move to the other.

----------


## desiresjab

> Thanks, desiresjab. It makes sense to pair the primitive roots with their inverses. It didn't occur to me that a primitive root's inverse is also a primitive root, but I think that should be the case.
> 
> I found another introductory textbook on the subject by Charles Vanden Eynden which I am also reading. When I get stuck with one, I move to the other.


Pairing members of sets up is a common trick in the field. Do not be afraid of pen and paper. If you can get one move closer to the nut, then you only have to see two moves deep instead of three, etc.

I love the way modular arithmetic forces even the largest numbers to play its game. It puts numbers on the rack and extracts certain truths from them. It says to the gigantic prime: _you are only 2 (mod 3), pal, now get up there_.

----------


## YesNo

I just finished the following article giving a proof of Euler's theorem using pairing based on a number and its inverse. Being relatively prime to n is the same thing as having an inverse mod n. That is (a,n) = 1 <=> there exists x such that ax = 1 mod n. I'll have to keep that in mind. http://sites.millersville.edu/bikena...uler/euler.pdf
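
The equivalence in the article is easy to check by brute force. A throwaway sketch of mine:

```python
from math import gcd

# (a, n) = 1  <=>  there exists x with a*x = 1 mod n
for n in (9, 10, 12, 15):
    for a in range(1, n):
        has_inverse = any(a * x % n == 1 for x in range(1, n))
        assert has_inverse == (gcd(a, n) == 1)
```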

----------


## desiresjab

> I just finished the following article giving a proof of Euler's theorem using pairing based on a number and its inverse. Being relatively prime to n is the same thing as having an inverse mod n. That is (a,n) = 1 <=> there exists x such that ax = 1 mod n. I'll have to keep that in mind. http://sites.millersville.edu/bikena...uler/euler.pdf


Yes indeed. There are so many of these little facts and theorems and criteria and lemmas and adjuncts that remembering the right one when it happens to be relevant is a matter of experience, as you see more and more of the multiplicity of connections between ideas at the level of numbers.

Only when I can see straight through QR to the intuitive reason it must be so could I possibly say God could not create a universe where QR is not as it is in our universe.

The one thing absolutely necessary to our universe or any universe is the same laws of mathematics everywhere. If I can _see_ that about QR, I can _say_ it, but I do not see it with that particular clarity yet, and furthermore do not know if I am capable of that. It is still a goal, though.

I figure God could not create a universe where 2 is not the successor of 1. When I can see the reasons for QR as clearly as 2 succeeding 1, I will know the limits of God, at least from my human perspective.

----------


## desiresjab

My suspicion is, for me the key is to "see through" why two 4n+3 primes behave the way they do, which I call _irreciprocity_.

If I can see why they cannot behave as 4n+1 primes or as a mixed pair, I sense I can see it all by the same method.

It might be that a formal proof exists which would satisfy me fully, but I have no access to it or would not have the tools to understand it. Many proofs enter the terrain of group theory and abstract algebra, and depend on quite a few other proofs. I need to upgrade there, but there is not time for everything.

I do not know if it is possible to see it the way I want to see it. I am not even sure that anyone does. Reading the words of math professors on the subject over at the n-category cafe, I can see that even among that level the grasp is dubious, depending on a particular proof usually. One senses the lack of a deep intuitive connection and understanding of why the numbers must behave as they do. Usually because they belong to some group, subgroup or coset, which is shown to be symmetrical or asymmetrical, as the purpose serves, etc., but which is rather far removed from ground level.

I may be trying to see something at ground level that is not visible from ground level.

There was another book I now remember. It was by a gentleman named Weil who was something of a modern giant. This book was a treatment of numbers from a group-theoretical standpoint, and I did not take it seriously enough at the time. That might be all I need, not the whole course. The thing is, I already have a decent understanding of what those fields do and say. It is simply the strange notation I have not adapted to on my own. I am such a fussbudget and whiner before I settle down and adapt to what one obviously has to do.

----------


## YesNo

There is a lot to understand, but I try to think of these as pieces of a jig-saw puzzle. Here are the pieces so far in my quest to solve Artin's Conjecture, at least the part that says for any number greater than 1 there are infinitely many primes for which it is a primitive root.

Puzzle Piece 1: Two integers that are relatively prime have inverses with respect to each other. In particular (a,n) = 1 if and only if there exists x such that ax=1 mod n. This means we only have to look at relatively prime integers and φ(n) would represent how many there are. If p is a prime, then φ(p) = p - 1. For simplicity stick with primes and the numbers relatively prime to them.

Puzzle Piece 2: A primitive root a multiplied by itself has to generate all the residues mod p. In particular it can't stop generating a number different from 1 until it has generated all of them. Further, for any d > 1 dividing p - 1, a^((p-1)/d) cannot equal 1 mod p. Otherwise it has stopped generating the residues and it is not a primitive root. So for d = 2, if a^((p-1)/2) = 1 mod p, and a therefore was a quadratic residue, it would not be a primitive root.

Puzzle Piece 3: If Artin's conjecture is true, then for each a > 1 there exist infinitely many primes for which a is a quadratic nonresidue. The converse is false. However, maybe this is easier to solve if it hasn't already been solved.

Puzzle Piece 4: To simplify matters, let a = 3 which is one of the 4m+3 numbers. If p is another 4m+3 prime then QR can relate them so that calculations work faster, but I can't rely on calculations since I am working with infinitely many primes p. So far QR seems good for calculation, but nothing else.

Puzzle Piece 5 (the one I'm on now): Suppose p is a 4m+3 prime and p - 1 = 2r where r is another prime. Let a = 3. If (3|p) = -1, then I have handled the case when the divisor is 2: 3^((p-1)/2) = -1 mod p. Does this imply anything about the other divisor, (p-1)/r? So, are there infinitely many primes where p - 1 = 2r and r is prime, and what additional conditions do I need to tell if 3 is a quadratic nonresidue?

At the moment I see QR's value in helping one calculate whether a number is a quadratic residue or not faster. I must be missing something important.
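
Puzzle piece 5 can be explored numerically. A sketch of mine (names are my own): primes with p - 1 = 2r, r prime, are the so-called safe primes, and one can tabulate whether 3 is a nonresidue for each. Interestingly, in this range 3 is a nonresidue only for p = 5 and 7, since every larger safe prime turns out to be congruent to 11 mod 12, where 3 is a residue.

```python
def is_prime(n):
    """Trial division, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# safe primes p = 2r + 1 (r prime) below 300, and the status of 3 mod p
# via Euler's criterion (value p-1 means 3 is a nonresidue)
safe = [p for p in range(5, 300, 2)
        if is_prime(p) and is_prime((p - 1) // 2)]
status = {p: pow(3, (p - 1) // 2, p) for p in safe}

# every safe prime above 7 is 11 mod 12, so 3 is a residue there
assert all(p % 12 == 11 for p in safe if p > 7)
```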

----------


## desiresjab

> There is a lot to understand, but I try to think of these as pieces of a jig-saw puzzle. Here are the pieces so far in my quest to solve Artin's Conjecture, at least the part that says for any number greater than 1 there are infinitely many primes for which it is a primitive root.
> 
> Puzzle Piece 1: Two integers that are relatively prime have inverses with respect to each other. In particular (a,n) = 1 if and only if there exists x such that ax=1 mod n. This means we only have to look at relatively prime integers and φ(n) would represent how many there are. If p is a prime, then φ(p) = p - 1. For simplicity stick with primes and the numbers relatively prime to them.
> 
> Puzzle Piece 2: A primitive root a multiplied by itself has to generate all the residues mod n. In particular it can't stop generating a number different from 1 until it generated all of them. Further for any d > 1 dividing p - 1, a(p-1)/d cannot equal 1 mod n. Otherwise it has stopped generating the residues and it is not a primitive root. So for d = 2, if a(p-1)/2 = 1 mod n and therefore was a quadratic residue it would not be a primitive root.
> 
> Puzzle Piece 3: If Artin's conjecture is true, then for each a > 1 there exist infinitely many primes for which a is a quadratic nonresidue. The converse is false. However, maybe this is easier to solve if it hasn't already been solved.
> 
> Puzzle Piece 4: To simplify matters, let a = 3 which is one of the 4m+3 numbers. If p is another 4m+3 prime then QR can relate them so that calculations work faster, but I can't rely on calculations since I am working with infinitely many primes p. So far QR seems good for calculation, but nothing else.
> ...


Philosophically, my own inclination is toward mathematics as we know it being necessary just as it is. God could not controvert or skirt this necessity, meaning God has limitations. A limited God was an idea of John Stuart Mill.

Very, very true--I could have set the bar anywhere, could have chosen easier propositions. But I just happened to settle on QR because I knew it was hard, did not understand it at the time, and figured I should earn the right to make such a statement as _God is constrained by mathematics_.

I am so close now. I again sense Eisenstein's proof as the way forward. If one cannot see it in the numbers themselves, see it in the exponents represented by those dots and X's, then backwards extrapolate to the numbers.

----------


## desiresjab

I think a key point to realize about Eisenstein's proof is that in his triangles AYX and WAY, the number of lattice points does not represent the actual exponent on p or q, but has the right parity, which is all that matters--odd or even. For instance, the number of even lattice points in his big triangle, with 17 even lattice points, seems to represent the true exponent on -1 for p, whereas the triangles AYX and WAY merely give the right parity for p or q, which is sufficient, indeed, but not quite the same thing.

In the event of two 4n+3 primes, the two small triangles will have opposite parity, which forces -1 as the final outcome of the operation.

I am conjecturing that the triangles AYX and WAY will never contain the same quantity of total lattice points, but their parity will be in accord when both are not 4n+3 types.

This will at first seem strange, as the two large triangles ABC and ADC always have the same total number of lattice points, of course. This does not mean one contains the same number of even or odd points as the other, however, for indeed they do not. Only their total number is equal.

Let's forget I made that conjecture, since it is false. You could say it is usually true but not always. I think they can be equal when the two primes are near the same size and their QR value is 1, and when, of course, they are both 4n+1 types only.

----------


## desiresjab

This goes to show how dangerous the conjecture game in mathematics is. I amended my above post four or five times until I finally saw the truth. The quantity of lattice points in the triangles AYX and WAY can be equal when the two primes are close enough in size to each other, and at least one is a 4n+1 type. I don't think that fact even has much significance. Red Herring.

----------


## desiresjab

Even so, the two triangles will have different quantities of even and odd lattice points, though the total number of lattice points in each is the same. I am done with that. Can those two triangles ever have the same number of even points and the same number of odd points? I don't know, y'all, and I ain't gonna think about it. However, I think they cannot. Watch out! Watch out!

----------


## desiresjab

Hold it, dummy (speaking to myself). The natural exponent for -1, i.e., the one which duplicates ((p-1)/2)((q-1)/2), is found by summing the total number of lattice points in the triangles AXY and WAY. Ah, now that is good. We have gotten somewhere.

In the case of 11 and 13 for p and q, the exponent would be 30, which is even and therefore produces 1 when used as an exponent. 

(-1)^(((11-1)/2)((13-1)/2)) = (-1)^30 = 1
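
The count can be reproduced in a few lines. A sketch of mine: summing floor(qx/p) over x = 1..(p-1)/2 counts the lattice points below the diagonal in one of the two small triangles, and the two triangles together hold ((p-1)/2)((q-1)/2) points; Eisenstein's lemma ties the same count to the symbol itself.

```python
# Lattice points below the diagonal in one half-rectangle triangle
def triangle_points(p, q):
    return sum(q * x // p for x in range(1, (p - 1) // 2 + 1))

p, q = 11, 13
total = triangle_points(p, q) + triangle_points(q, p)
assert total == ((p - 1) // 2) * ((q - 1) // 2)   # 5 * 6 = 30, even, so +1

# Eisenstein's lemma for odd q: (q|p) = (-1)^triangle_points(p, q)
assert (-1) ** triangle_points(11, 3) == 1     # (3|11) = 1, a residue
assert (-1) ** triangle_points(7, 3) == -1     # (3|7) = -1, a nonresidue
```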

----------


## YesNo

Here are some links that I plan to look at more closely on Eisenstein's proof of QR to see if I can understand this. Do you have some links?

https://en.wikipedia.org/wiki/Proofs...ic_reciprocity

http://math.ucsb.edu/~jcs/QuadraticReciprocity.pdf

----------


## desiresjab

> Here are some links that I plan to look at more closely on Eisenstein's proof of QR to see if I can understand this. Do you have some links?
> 
> https://en.wikipedia.org/wiki/Proofs...ic_reciprocity
> 
> http://math.ucsb.edu/~jcs/QuadraticReciprocity.pdf


I have used so many cites I could not begin to dig them up.

Make a p X q rectangle on graphing paper. Draw a diagonal carefully. 19 by 23 was the largest rectangle my paper allowed me to draw. There are enough primes below 23 to get the picture.

The exact number of lattice points corresponding to ((p-1)/2)((q-1)/2) will be found in the triangles AYX and WAY within the rectangle (p/2)(q/2). Watch what happens in those two triangles as you construct rectangles for different primes and prime types. Even when the total number of points in the two triangles is the same, they do not have the same quantity of odds or evens.

The borders of (p/2)(q/2) lie between lines on the graphing paper, ensuring that we have no lattice points on the perimeter. The invisible rectangle with lattice points on its perimeter would have dimensions (p-1)/2 by (q-1)/2. This is the geometric connection between ((p-1)/2)((q-1)/2) and (p/2)(q/2). I am not sure how clear that is. I am trying to bring you up to my current understanding of the problem.

I see it now. I see how those small triangles work and why they do. I am now about 98% satisfied with my understanding of Eisenstein's proof. It is no longer a mystery why the lattice points and the exponents match up.

God could only create a universe where QR is true, if our imaginations are asked to judge. No QR in our universe is every bit as absurd a notion to our brains as _2 is not the successor of 1_. Over and out.

----------


## YesNo

I see the diagram. Also it makes sense that there is no lattice point on the diagonal line, y = (p/q)x, since p and q are distinct primes. That is, an integer value for x would not make y an integer. I also see how there are ((p-1)/2)((q-1)/2) lattice points in the two triangles, AYX and WAY. What I don't see is the connection between those lattice points and something that will discriminate between 1 and -1. All I can see is that the overall count is correct. This seems to me like I am missing something.

I remember reading that Galileo pointed his telescope to Jupiter and asked one of his friends to look. Even though his friend was willing to agree with him, he didn't understand that what he was looking at were Jupiter's moons rather than more stars and so the evidence didn't convince him. I am sort of like that with this proof at the moment. Proofs are like spaghetti code until one understands them. Unraveling the spaghetti takes time. After understanding, one can try making a new proof that might be easier to understand. I hear there are hundreds of proofs for QR.

The Gauss Lemma makes more geometric sense to me at the moment than this one does. Start with Fermat's theorem, a^(p-1) = 1 mod p. The quadratic residues would have a^((p-1)/2) = 1 mod p as well. And so one already has one way to calculate whether a is a quadratic residue or not, by repeated multiplication of a. The lemma replaces a with -1, which simplifies the calculation as far as the multiplication goes, but also complicates it since the exponent is no longer (p-1)/2, but the number of negative elements when a, 2a, ..., ((p-1)/2)a are reduced mod p to numbers between -(p-1)/2 and (p-1)/2. Finding that exponent is now the hard part.
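
Gauss's lemma is short enough to sketch directly (my own code, my own names): reduce a, 2a, ..., ((p-1)/2)a into the symmetric range and count the negatives, then compare against Euler's criterion.

```python
# Gauss's lemma: (a|p) = (-1)^count, where count is how many of
# a, 2a, ..., ((p-1)/2)a land in the "negative" half when reduced
# mod p to the range -(p-1)/2 .. (p-1)/2.
def gauss_lemma(a, p):
    count = sum(1 for k in range(1, (p - 1) // 2 + 1)
                if (a * k) % p > p // 2)
    return (-1) ** count

# agrees with Euler's criterion a^((p-1)/2) mod p for every a
for p in (5, 7, 11, 13, 17, 19):
    for a in range(1, p):
        euler = pow(a, (p - 1) // 2, p)
        assert gauss_lemma(a, p) == (-1 if euler == p - 1 else 1)
```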

Geometrically that makes sense to me and, by the way, it also helps solve the puzzle piece I was working on. It seems there are infinitely many primes p for which 3 is a quadratic nonresidue and hence a potential primitive root. I would now need to generalize that.

----------


## desiresjab

Sometimes it is difficult for me to determine if you are talking about QR or PR, or searching for a connection between the two. I have to go out on a limb and say provisionally I do not think the two are hugely connected. They are connected some way, however, because just about all number theoretic functions are connected, no matter how distantly. Their connection may even be important.

This could very well be a fault in my own vision. From looking at the problem so long my own way I may have developed myopia. My brain is open for business, though.

I know for a fact that Eisenstein went into his proof with much knowledge. He already knew the significance of (p-1)/2 and (q-1)/2, which is why he made the lattice points in AXY and WAY match up to them one-to-one.

At this point we have much knowledge, too. For instance, we do not have to consult anything to know that a 4n+1 prime will always have -1 as a quadratic residue and 4n+3 primes will never.

You seem to be asking: _where is this information located in Eisenstein's rectangles?_ I have to say at this point I do not know if it even is. This is information we already know, and I am unaware of it being graphically represented in Eisenstein's diagram at this point. I will search for its presence, however, for I am not fool enough to think there is nothing obvious I might be missing.

----------


## desiresjab

Gauss knew Eisenstein. He may have been Gauss's student. What I suspect is that Eisenstein found the most elementary proof possible. This is what mathematicians always strive for. If one man's proof requires calculus and a second man's proof requires only algebra, the second proof is considered more elegant.

For Gauss to have been all around this proof only to have Eisenstein find and present it--did this rasp the old man? Gauss demoted Euler because he was so close to QR and did not get it. Yet he stood within inches (figuratively) of this be-all end-all of QR proofs.

Hundreds more proofs were to come, but we know none are as elegant as Eisenstein's. The fact that Wikipedia chose it is testament to this. Every other proof I have looked at is a devil, and requires higher concepts.

What was Gauss thinking when he made his famous comment about Eisenstein? Gauss made seven or eight proofs of QR in his lifetime. I would be willing to bet each illuminated a different aspect of it, or Gauss would not have bothered. The fact that he was still working on it throughout his lifetime probably means even he, the mightiest of mighty, felt he did not have full grasp of it. Why else would a man with so many other important things to get to still be fussing with QR decades after he solved it?

This means we sure as heck do not have to feel bad or guilty for having only partial understanding of this theory. Gauss had the telegraph to invent, conformal mapping to formalize, magnetism to overhaul, differential geometry to launch, yet he kept coming back to QR his entire life to produce more proofs. Ask yourself, would he have done this if he had every bit of understanding he felt he needed on the topic?

He felt it was his crowning achievement. This fellow who as a teenager cracked a seventeen-hundred-year-old problem that had puzzled the ancients, who formalized modular arithmetic, who presented the first proof of the fundamental theorem of algebra, who built the algebraic structure for imaginary numbers--he considered QR the greatest (and perhaps the deepest) of his achievements.

If we understood QR quickly and easily, something would be wrong. Powerhouses like Gauss and Euler and Legendre do not struggle mightily only for us to come along and breezily understand at will. Make no mistake about it, this stuff is hard. QR is a gateway to the really hard in number theory. It is always presented at the end of elementary number theory courses. After that you are no longer in the wading end of the pool--you swim or sink in those deep waters.

----------


## Dreamwoven

I wonder if this website would be helpful?
http://math.stackexchange.com/questi...olumn-pivoting

----------


## YesNo

I looked up "QR decomposition" and it seems to be concerned with factoring matrices: https://en.wikipedia.org/wiki/QR_decomposition It may be related, but I don't see how at the moment. Quadratic reciprocity, which we abbreviated here as QR, is about whether an integer x is a square modulo a prime p, that is, does there exist an integer r such that r^2 ≡ x (mod p)? If so, (x|p), the Legendre notation for whether x is a quadratic residue mod p, would equal 1.
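Euler's criterion makes this directly computable. Here is a quick Python sketch (the function name is my own) of the Legendre symbol for an odd prime p:

```python
def legendre(x, p):
    """Legendre symbol (x|p) for an odd prime p, via Euler's criterion:
    x^((p-1)/2) mod p is 1 for a residue, p-1 (i.e. -1) for a non-residue."""
    r = pow(x, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

print(legendre(2, 7))  # 1, since 3*3 = 9 = 2 mod 7
print(legendre(3, 7))  # -1, since 3 is not a square mod 7
```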

What I am looking at is Artin's conjecture, which says that given a number m > 1 (provided m is not a perfect square), there are infinitely many primes for which m is a primitive root. That is, multiply m by itself over and over again and all the elements of the reduced residue system mod that prime are generated. If m is a primitive root then it is also a quadratic non-residue; otherwise it would not generate all the elements, but stop halfway through.
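The "stops halfway" behavior is easy to watch by brute force. A small Python sketch (naming mine):

```python
def is_primitive_root(m, p):
    """True when the powers m, m^2, ..., m^(p-1) mod prime p hit every
    nonzero residue, i.e. m generates the reduced residue system mod p."""
    seen = set()
    x = 1
    for _ in range(p - 1):
        x = x * m % p
        seen.add(x)
    return len(seen) == p - 1

print(is_primitive_root(2, 11))  # True: 2 generates all of 1..10 mod 11
print(is_primitive_root(3, 11))  # False: 3 has order 5, stopping halfway
```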

Desiresjab is interested in quadratic reciprocity and in particular Eisenstein's proof of it. I find that interesting also, because the more I learn about that the more I understand why Artin's conjecture is hard to solve. 

Here is an outline of a proof of quadratic reciprocity using Eisenstein's lattice points: http://math.ucr.edu/home/baez/136/quadratic.pdf

The article doesn't prove anything. It just states the propositions, which is frustrating, but it only claimed to offer a "big picture". The proposition that gets me stuck is called, in that paper, "Baby Eisenstein's Lemma". It says that the number of lattice points in the lower triangle of Eisenstein's drawing has the same parity (even or odd) as the number of elements in the Gauss Lemma that fall in the negative part of the reduced residue system running from -(p-1)/2 to (p-1)/2. If we know how many there are, then m is a quadratic residue if that number is even and a quadratic non-residue if it is odd. So it is not that they match one-to-one, but that they have the same parity.

That's the clue I am following at the moment. It is only a parity issue between those lattice points and the elements in the Gauss Lemma. However, I don't know how to prove that, which means I don't understand it.
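The parity claim can at least be checked numerically. A Python sketch, under my reading of the paper: the lower-triangle count is the sum of floor(q*x/p), and the Gauss-Lemma count is the number of products q*x mod p landing above (p-1)/2:

```python
def gauss_lemma_count(q, p):
    """Count of q*x mod p, x = 1..(p-1)/2, falling in the 'negative'
    half of the residue system, i.e. greater than (p-1)/2."""
    return sum(1 for x in range(1, (p - 1) // 2 + 1) if q * x % p > (p - 1) // 2)

def lattice_count(q, p):
    """Lattice points under the line y = (q/p)x for x = 1..(p-1)/2."""
    return sum(q * x // p for x in range(1, (p - 1) // 2 + 1))

p, q = 11, 13
print(gauss_lemma_count(q, p), lattice_count(q, p))  # 3 15 -- same parity
```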

----------


## desiresjab

> I looked up "QR decomposition" and it seems to be concerned with factoring matrices: https://en.wikipedia.org/wiki/QR_decomposition It may be related, but I don't see how at the moment. Quadratic reciprocity, which we abbreviated here as QR, is about whether an integer x is a square modulo a prime p, that is, does there exist an integer r such that r^2 ≡ x (mod p)? If so, (x|p), the Legendre notation for whether x is a quadratic residue mod p, would equal 1.
> 
> What I am looking at is the Artin's conjecture which says given a number, m>1, there are infinitely many primes for which m is a primitive root. That is, multiply m by itself over and over again and all the elements of the reduced residue system mod that prime are generated. If m is a primitive root then it is also a quadratic non-residue, otherwise it would not generate all the elements, but stop half way through.
> 
> Desiresjab is interested in quadratic reciprocity and in particular Eisenstein's proof of it. I find that interesting also, because the more I learn about that the more I understand why Artin's conjecture is hard to solve. 
> 
> Here is an outline of a proof of quadratic reciprocity using Eisenstein's lattice points: http://math.ucr.edu/home/baez/136/quadratic.pdf
> 
> The article doesn't prove anything. It just states the propositions, which is frustrating, but it only claimed to offer a "big picture". The proposition that gets me stuck is called, in that paper, "Baby Eisenstein's Lemma". It says that the number of lattice points in the lower triangle of Eisenstein's drawing has the same parity (even or odd) as the number of elements in the Gauss Lemma that fall in the negative part of the reduced residue system running from -(p-1)/2 to (p-1)/2. If we know how many there are, then m is a quadratic residue if that number is even and a quadratic non-residue if it is odd. So it is not that they match one-to-one, but that they have the same parity.
> ...


The number of lattice points in Eisenstein's triangles AYX and WAY give the exact value of the exponents, not just the correct parity.

----------


## desiresjab

That is, they give the correct sum of total exponents: (12 × 10)/4 = 15 + 15.

----------


## desiresjab

As far as I can tell, quadratic reciprocity and the QR in those matrix decompositions are different things. I think QR means something else with regard to those matrices; I do not think it means quadratic reciprocity.

----------


## desiresjab

My posts are disappearing.

----------


## desiresjab

My own unsolved problem is Brocard's problem. In modular arithmetic it might be stated thus:

q^2 ≡ 1 (mod p!). Things that look as if they should be simple turn out to be near impossible.
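Brocard's problem asks for integers with n! + 1 = m^2. A brute-force Python sketch (my naming) that recovers the only known solutions:

```python
import math

def brocard_solutions(limit):
    """Search n = 1..limit for n! + 1 being a perfect square."""
    hits = []
    f = 1
    for n in range(1, limit + 1):
        f *= n  # running factorial
        m = math.isqrt(f + 1)
        if m * m == f + 1:
            hits.append((n, m))
    return hits

print(brocard_solutions(100))  # [(4, 5), (5, 11), (7, 71)]
```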

----------


## YesNo

> The number of lattice points in Eisenstein's triangles AYX and WAY give the exact value of the exponents, not just the correct parity.


That's true. The exponent (p-1)/2 * (q-1)/2 is the number of lattice points within the triangles AYX and WAY. However, the Gauss Lemma comes up with a smaller exponent. My difficulty is how to show that the smaller exponent can be replaced by the larger one so that the calculation depends only on p and q.

For example, let p = 11 and q = 13. Then (p-1)/2 = 5 and (q-1)/2 = 6. To see what the Gauss Lemma provides, consider the numbers from 1 to (13-1)/2 = 6, that is {1,2,3,4,5,6}, and multiply them by p = 11 mod 13. This gives {11,9,7,5,3,1}. Values over 6 could be viewed as negative if we used the residue set between -6 and 6 mod 13 rather than the one between 0 and 12 mod 13. There are 3 values larger than 6, namely {11,9,7}, and (-1)^3 = -1 = 11^((13-1)/2) mod 13 = 11^6 mod 13. That would be (p|q) = (11|13) = -1. The Gauss Lemma states that this is another way to calculate (p|q) rather than raising p to the (q-1)/2 power mod q.

Considering (13|11), we would look at this set of residues mod 11: {1,2,3,4,5}. By the Gauss Lemma we multiply each of them by 13 and get {2,4,6,8,10}. Now we count those in the set greater than 5 = (11-1)/2 and find there are 3 of them. So (-1)^3 = -1 = 13^((11-1)/2) mod 11 = 13^5 mod 11. That would be (q|p) = (13|11) = -1.

Since p = 11 ≡ 3 (mod 4) and q = 13 ≡ 1 (mod 4), quadratic reciprocity says that (p|q) = (q|p), which turns out to be the case since they both equal -1.

My problem is, I understand the proof of the Gauss Lemma, which gives exponents of 3 for both p and q. But I don't see how either Gauss or Eisenstein raises that exponent to 5 and 6. I can see why they would want such exponents: they would be easier to calculate.
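For what it's worth, the two computations can be run side by side for p = 11 and q = 13. A Python sketch (names mine) showing the Gauss Lemma sign agreeing with Euler's criterion even though the counts involved (3 versus 5 or 6) differ:

```python
def gauss_lemma_sign(a, p):
    """(a|p) by the Gauss Lemma: (-1)^n, with n the count of a*x mod p,
    x = 1..(p-1)/2, that exceed (p-1)/2."""
    n = sum(1 for x in range(1, (p - 1) // 2 + 1) if a * x % p > (p - 1) // 2)
    return (-1) ** n

def euler_sign(a, p):
    """(a|p) by Euler's criterion: a^((p-1)/2) mod p."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p, q = 11, 13
print(gauss_lemma_sign(p, q), euler_sign(p, q))  # -1 -1
print(gauss_lemma_sign(q, p), euler_sign(q, p))  # -1 -1
```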

----------


## desiresjab

> That's true. The exponent (p-1)/2*(q-1)/2 are the number of lattice points within the triangles AYX and WAY. However, the Gauss Lemma comes up with a smaller exponent. My difficulty is how to show that the smaller exponent can be replaced by the larger one so that the calculation depends only on p and q.
> 
> For example, let p = 11 and q = 13. Then (p-1)/2 = 5 and (q-1)/2 = 6. To see what the Gauss Lemma provides, consider the numbers from 1 to (13-1)/2 = 6, that is {1,2,3,4,5,6}, and multiply them by p = 11 mod 13. This gives {11,9,7,5,3,1}. Values over 6 could be viewed as negative if we used the residue set between -6 and 6 mod 13 rather than the one between 0 and 12 mod 13. There are 3 values larger than 6, namely {11,9,7}, and (-1)^3 = -1 = 11^((13-1)/2) mod 13 = 11^6 mod 13. That would be (p|q) = (11|13) = -1. The Gauss Lemma states that this is another way to calculate (p|q) rather than raising p to the (q-1)/2 power mod q.
> 
> Considering (13|11), we would look at this set of residues mod 11: {1,2,3,4,5}. By the Gauss Lemma we multiply each of them by 13 and get {2,4,6,8,10}. Now we count those in the set greater than 5 = (11-1)/2 and find there are 3 of them. So (-1)^3 = -1 = 13^((11-1)/2) mod 11 = 13^5 mod 11. That would be (q|p) = (13|11) = -1.
> 
> Since p = 11 = 3 mod 4 and q = 13 = 1 mod 4, quadratic reciprocity says that (p|q) = (q|p) which turns out to be the case since they both equal -1.
> 
> My problem is, I understand the proof of the Gauss Lemma, which gives exponents of 3 for both p and q. But I don't see how either Gauss or Eisenstein raises that exponent to 5 and 6. I can see why they would want such exponents: they would be easier to calculate.


5 and 6 are the effective dimensions of the smaller rectangle comprised of the triangles AYX and WAY. These match up with Euler's criterion. These are the ones you want, I believe. Adding 1/2 to their dimensions allows lattice points on the perimeter of the smaller rectangle with dimensions 5 and 6. Therefore they are all in the interior of the rectangle augmented by 1/2 in its dimensions.

5 and 6 get the right product, but I do not see anything in the triangles denoting a significance of 5 and 6, other than their product and their opposite parity. I also see nothing which tells me about -1, other than a fact we already know--that -1 is a residue of all 4n+1 primes. Since we know that already about primes, it does not seem important to me that the diagram does not speak to that aspect.

----------


## YesNo

It looks like the exponent of -1 in the following

(p|q) = (-1)^((p-1)(q-1)/4) (q|p)

is just an algebraic way to write the English phrase

(p|q) = (q|p) unless p and q are both congruent to 3 mod 4, in which case (p|q) = -(q|p).

It also occurred to me that the main reason for quadratic reciprocity is calculation. This theorem allows us to flip the p and q and then reduce the larger one.
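That flip-and-reduce recipe is exactly how the Jacobi symbol (the Legendre symbol's generalization to odd moduli) is computed in practice. A hedged Python sketch of the standard algorithm:

```python
def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, by flip-and-reduce:
    strip factors of 2 with the (2|n) rule, then swap a and n,
    negating when both are congruent to 3 mod 4."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):  # (2|n) = -1 exactly when n = 3 or 5 mod 8
                result = -result
        a, n = n, a              # the reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

print(jacobi(11, 13), jacobi(13, 11))  # -1 -1
```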

----------


## desiresjab

> It looks like the exponent of -1 in the following
> 
> (p|q) = (-1)^((p-1)(q-1)/4) (q|p)
> is just an algebraic way to write the English phrase
> 
> (p|q) = (q|p) unless p and q are both congruent to 3 mod 4 in which case (p|q) = -(q|p).
> It also occurred to me the main reason for quadratic reciprocity is for calculation purposes. This theorem allows us to flip the p and q and then reduce the larger one.


I'm not too sharp right now. I have come down with the flu or something. Reciprocity is easy to state in English. The first step to understanding it is to learn enough math (mostly modular) to understand the English.

Now of course it has a much bigger reason--the razzle dazzle of ciphers.

I'll be back after I throw up and sleep.

----------


## YesNo

Get well! 

The more I look at this the more I feel like I am still in the shallow end of the pool.

Edit: After reading a proof, different from Eisenstein's, I began to see how the primes p and q are connected: They are each related to their product, pq. 

Also the proof depended upon something nearly obvious: if three integers are added together and their sum is an even number then either all three of the integers are even or only one of them is.

http://www.lehigh.edu/~shw2/q-recip/gauss5.pdf

----------


## desiresjab

> Get well! 
> 
> The more I look at this the more I feel like I am still in the shallow end of the pool.
> 
> Edit: After reading a proof, different from Eisenstein's, I began to see how the primes p and q are connected: They are each related to their product, pq. 
> 
> Also the proof depended upon something nearly obvious: if three integers are added together and their sum is an even number then either all three of the integers are even or only one of them is.
> 
> http://www.lehigh.edu/~shw2/q-recip/gauss5.pdf


Back with better health. 

How Gauss determines that what we might call the _overflow values_ are enough to determine the quadratic relationship is algebraic magic, of course, but not necessarily transparent. It gives another fact, but the torch in that fact is hard to light. It requires extreme familiarity with modular operations and the Chinese remainder theorem, and with how they all apply. Seeing how every rule you need applies at the right time and place is always going to be the challenge. In one's personal investigations, if one misses one of these rules, a great deal of time can be lost (though not necessarily wasted) chasing down a proof, along the way, of something that boils down after all to a basic law of modular arithmetic the explorer has not yet assimilated into his mathematical vocabulary fully enough for its applications and assumptions to come as naturally as they do in ordinary algebra.

No matter how simple they try to make QR, it always turns out pretty complex, except in terms of the laws themselves, which are clear and easy to apply. In asking why the two species of primes behave the way they do with themselves and with the other species, it is profitable to remember that they only do so in modular arithmetic, where QR is a theorem. The comparison to normal arithmetic in the Martinson link I gave earlier is still the most illuminating and suggestive article I have seen yet. For that reason, I believe a good review of Fermat's and Lagrange's sums of squares is in order. I did this in cursory fashion a few months back, without settling in for the full ride with different hosts.

Another nagging proposition I looked at only a few weeks ago is Bertrand's paradox. It deals with geometry and the _power of the continuum_ infinity of points on the surface of a sphere. Someone gives a good explanation on YouTube of how it is mathematically possible to dissect a sphere of diameter X and reassemble the parts into two full and complete spheres of diameter X without adding any new material (strictly speaking, that sphere-doubling construction is the Banach-Tarski paradox). No man or machine could actually make these slices and chops, but in theory it is feasible. Or is it an unresolved paradox of infinite set theory? It deserves a second dip. So many things to chase down.

----------


## desiresjab

To continue in the same vein, the quote _But this is just the set of integers_, lifted directly from the end of the Gauss proof, is, in English, what always happens at the conclusion of proofs in this mode of math, now isn't it? The results of operations on a residue system are shown to be equivalent to another, previously defined set of integers. When all you need is parity to prove your point, the sets do not even need the same cardinality to yield their information. This is heading toward set theory and group theory. I am still stuck on the idea of an easier vantage from which to peer at the heart of the law and see naked numbers bathing.

----------


## desiresjab

Deciding how to show that some particular set equals another set or subset must be the normal way to proceed, then: those are the terms in which you strive to frame your question in these modular forests. That is my nutshell observation.

----------


## YesNo

> Another nagging proposition I looked at only a few weeks ago is Bertrand's paradox. It deals with geometry and the _power of the continuum_ infinity of points on the surface of a sphere. Someone gives a good explanation on YouTube of how it is mathematically possible to dissect a sphere of diameter X and reassemble the parts into two full and complete spheres of diameter X without adding any new material. No man or machine could actually make these slices and chops, but in theory it is feasible. Or is it an unresolved paradox of infinite set theory? It deserves a second dip. So many things to chase down.


I looked around on YouTube and found this description of Bertrand's Paradox: https://www.youtube.com/watch?v=uI2FnUmBeeo

It seems that the paradox is resolved once one defines what it means to "choose a chord at random". One of the choices started with a fixed point, another with a fixed diameter the chords had to cross and the third asked whether the midpoint of the chord was inside or outside an interior circle. Not all of the possible chords were permitted by the selection constraints in the first two examples. I suspect the third example did include all possible chords.

----------


## YesNo

Bertrand's problem looks harder than I realized: https://en.wikipedia.org/wiki/Bertra..._(probability)

I don't think the problem is resolved, as I claimed above, by saying there are more chords in one of the three examples. One can assume there is one chord in each. Then what is the probability that its length is greater than the side of an inscribed equilateral triangle? I think you are right in looking at this as a problem of picking a point from an infinite number of points.

----------


## stavrost

My recollection of time is that it began with the universe (space-time). So the first thing we have to do is wrap our heads around the question: if space-time began with the universe, then how can we ask "what was before?" Before implies time, but time didn't exist.

----------


## stavrost

A fascinating idea that I have wondered about is the Schrodinger's cat conundrum. If there has to be an observer before any event can take place, then there had to be an observer before the first two particles of matter interacted, did there not? This has led some to theorize that there was intelligence prior to matter, rather than the other way around.

----------


## YesNo

The Schrodinger cat problem keeps bothering me as well. Basically, every time I think I understand what it is supposed to show, I doubt that I have it right. Although I haven't read much lately, I have Amit Goswami's "The Self-Aware Universe" on my desk. He promotes "idealist science" as opposed to "materialist science".

My current view is slightly different from "there was intelligence prior to matter". At the moment, I don't think unconscious matter exists. There is nothing but intelligence. What we see as matter is conscious at a lower level that appears unconscious at the macro level where we view it. 

Of course, I might be completely wrong.

----------


## tailor STATELY

Let me know if this is the wrong place...

"The superfluid Universe": 


> We are used to thinking that quantum physics dominates only the microscopic realm. But the more physicists have learned about quantum theory, the more it has become clear that this isn’t so. Bose-Einstein condensates are one of the best-studied substances that allow quantum effects to spread widely through a medium. In theory, quantum behaviour can span arbitrarily large distances, provided it isn’t disturbed too much.


 https://aeon.co/essays/is-dark-matte...tm_source=digg

... and a toon: http://www.gocomics.com/bloom-county

Ta ! _(short for tarradiddle)_,
tailor STATELY

----------


## Dreamwoven

I'm on this list and though I don't understand all the mathematical stuff the main players here write, I enjoy the discussions between desiresjab and YesNo. Read the thread and you will see what I mean.

----------


## desiresjab

I was offline for half a week with computer issues. The only thing to do was to go to pen and paper. 

What do we know about squares in general? What is one fact of odd squares?

----------


## YesNo

> What do we know about squares in general? What is one fact of odd squares?


I don't know, except for the obvious fact in the integers Z: one gets an odd number when an odd number is squared and an even number when an even number is squared.

----------


## YesNo

> Let me know if this is the wrong place...
> 
> "The superfluid Universe": https://aeon.co/essays/is-dark-matte...tm_source=digg
> 
> ... and a toon: http://www.gocomics.com/bloom-county
> 
> Ta ! _(short for tarradiddle)_,
> tailor STATELY


I liked the cartoon.

I didn't know that the theories around dark matter had two major variations, those promoting modified gravity and those promoting a cold particle. The superfluid idea is also new to me. Maybe it can bridge the ideas. The modified gravity reminds me of a talk by Rupert Sheldrake where he questioned whether physical constants, in particular G, were actually constant, but changed. If G changed that would be one way to get modified gravity.

----------


## desiresjab

> I don't know, except for the obvious fact in the integers Z: one gets an odd number when an odd number is squared and an even number when an even number is squared.


I thought a very simple fact might be surprising. All odd squares are 4n+1 numbers. There are no 4n+3 squares; such an animal cannot exist.
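This is quick to confirm empirically; the algebraic reason is (2k+1)^2 = 4k(k+1) + 1. A one-line Python check:

```python
# Every odd square leaves remainder 1 mod 4, never 3.
print(sorted({(2 * k + 1) ** 2 % 4 for k in range(1000)}))  # [1]
```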

----------


## desiresjab

An important fact for anyone trying to learn modular arithmetic has to do with symmetry. In normal arithmetic the negative and positive integers have symmetry across the point zero on the number line. That is, the absolute value of -5, for instance, is equal to 5. In modular arithmetic with primes, this is no longer true-- 

-5 ≡ 6 (mod 11).

Perfect multiples of the modulus have familiar symmetry across zero, but no other residue class does.

Under any prime modulus *p*, start squaring the positive integers _n_ in succession. The series will always begin with the standard squares you are familiar with...1, 4, 9, 16..., until *n* becomes greater than *p*^(1/2) (the square root of *p*), at which point *n*^2 will wrap around the modulus to some value.

Quadratic reciprocity is about how two moduli wrap around each other under quadratic _pressure_.
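The wrap-around, and the symmetry n^2 = (p-n)^2 mod p, can both be seen by just listing the squares. A small Python sketch for p = 11:

```python
p = 11
squares = [n * n % p for n in range(1, p)]  # squares wrap once n > sqrt(p)
print(squares)               # [1, 4, 9, 5, 3, 3, 5, 9, 4, 1]
print(sorted(set(squares)))  # [1, 3, 4, 5, 9] -- only (p-1)/2 distinct residues
```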

----------


## desiresjab

We may further notice that when both *p* and *q* are primes of the species 4n+3, their squares always wrap around the other modulus so that *p*^2 ≡ 2 (mod *q*) and *q*^2 ≡ 1 (mod *p*), or vice versa. They will always express this relationship when squared within the modulus of the other. This is a fact of the universe, as inviolate as the fact that 2 is the successor of 1.

One always checks first to see if the larger of *p* and *q* simply reduces to a familiar square under the other as modulus. If so, the work is done. Otherwise one starts squaring _n_'s to see if any wraps around to the value of *p*, under *q* as modulus, or vice versa.

----------


## desiresjab

Not surprisingly, then, two 4n+3 primes p and q wrap around each other just as their squares do. Either 4p+3≡1 (mod 4q+3) and 4q+3≡2 (mod 4p+3), or vice versa.

We must remember that if a ≡ b (mod m), then a^p ≡ b^p (mod m).

----------


## desiresjab

It appears my recent posts on 4n+3 only apply if one of the 4n+3 primes is 3 itself. More later. Three is not typical. Or is it?

----------


## YesNo

> I thought a very simple fact might be surprising. All odd squares are 4n+1 numbers. There are no 4n+3 squares; such an animal cannot exist.


That is a more interesting fact than the one I presented. Here's my proof of it, since it is not immediately obvious:

Assume, to get a contradiction, that there exists an odd square m^2 congruent to 3 mod 4. There are two cases to consider: m is either congruent to 1 mod 4 or congruent to 3 mod 4.

Consider the first case, m ≡ 1 (mod 4). Then there exists r such that m = 4r + 1 and m^2 = (4r + 1)(4r + 1) = 16r^2 + 8r + 1, which is congruent to 1 mod 4. So m^2 is not congruent to 3.

Consider the second case, m ≡ 3 (mod 4). Then there exists s such that m = 4s + 3 and m^2 = (4s + 3)(4s + 3) = 16s^2 + 24s + 9. Since 9 is congruent to 1 mod 4, m^2 is again congruent to 1, not 3.

In both cases m^2 is not congruent to 3 mod 4, and since this contradicts the assumption, the assumption is false.

----------


## YesNo

> We may further notice that when both *p* and *q* are primes of the species 4n+3, their squares always wrap around the other modulus so that *p*^2 ≡ 2 (mod *q*) and *q*^2 ≡ 1 (mod *p*), or vice versa. They will always express this relationship when squared within the modulus of the other. This is a fact of the universe, as inviolate as the fact that 2 is the successor of 1.
> 
> One always checks first to see if the larger of *p* and *q* simply reduces to a familiar square under the other as modulus. If so, the work is done. Otherwise one starts squaring _n_'s to see if any wraps around to the value of *p*, under *q* as modulus, or vice versa.


I would say it is a fact of the axiom system and the set of elements one is using rather than a fact of the universe. One could change the axiom system or the set of elements and get something different. For example, Euclidean geometry need not have much to do with space in the universe around us, but the results would be inviolate facts within the axioms of Euclidean geometry. Only if one can't consistently change the axioms would it be possible to look at the results as relevant to the universe.

I agree that the computationally hard part comes from the wrapping process.

----------


## desiresjab

> I would say it is a fact of the axiom system and the set of elements one is using rather than a fact of the universe. One could change the axiom system or the set of elements and get something different. For example, Euclidean geometry need not have much to do with space in the universe around us, but the results would be inviolate facts within the axioms of Euclidean geometry. Only if one can't consistently change the axioms would it be possible to look at the results as relevant to the universe.
> 
> I agree that the computationally hard part comes from the wrapping process.


You can do these things anywhere. Euclidean geometry would be an outside geometry in some universes. Its laws would remain true, just as the laws of non-Euclidean geometries are true for us.

The wrapping process of moduli can turn a 4n+1 square into a 4n+3 number since, for instance, three is a square under some moduli.

I only need to pinpoint the mechanics that force (4n+3)^2 to perform its consistent behavior under prime moduli for the whole thing to shake out. It is a matter of mechanics. A mechanical detail is eluding me so far. That detail will clear up every question. I not only sense this is true, I know damned well it is. There is no doubt, either, that that detail is clearly available in group theory, which is why so many proofs rely on it.

I still believe it is something I can get from peering at the numbers. My investigations are going deeper underground where I need paper and pencil.

Remember the Martinson list for the sums of squares? He mentioned that any prime number generated by a sum of two squares was a 4n+1 number. He mentioned that the table would generate *every* prime of 4n+1 makeup. He did not mention that all odd numbers in the table were also 4n+1 numbers. Close inspection reveals that sums of two squares can only be 4n, 4n+2, or 4n+1 numbers.
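The mod-4 breakdown of two-square sums is easy to confirm: each square is 0 or 1 mod 4, so a sum of two squares is 0, 1, or 2 mod 4, never 3. A quick Python check:

```python
# A sum of two squares can only be a 4n, 4n+1, or 4n+2 number.
residues = {(a * a + b * b) % 4 for a in range(50) for b in range(50)}
print(sorted(residues))  # [0, 1, 2] -- no 4n+3 number is a sum of two squares
```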

On another note of interest: breaking a large 4n+1 prime into its unique sum of two squares is every bit as difficult as factoring. I have not delved deeply enough, but I wonder if any of the present encryption systems are taking advantage of this. A new function as the basis means no patent battles.

Anyway, I feel I am very close to the final solution with QR. I know where to dig and I think I know how to do it.

----------


## desiresjab

Here is a curious fact about 4n+3 primes. Look at seven and its squares with regard to other 4n+3 primes.

Because of what we already know, we can state unequivocally that no 4n+3 prime greater than 49 can ever wrap back to be a square (mod 7). Why? Because 7^2 is a normally occurring number (mod 59) and it will be the square between the two, since there can always and only be one square between 4n+3 primes.

7^2 is the naturally occurring quadratic residue of every 4n+3 prime larger than 47, not the other way around, ever.

This idea has its way of working with 4n+1 primes and mixed couples, too. If the larger prime does not reduce back to a square under the smaller prime, then the smaller one will not stretch to a square either, by the rules.

----------


## desiresjab

Therefore, under any moduli, if p^2 < q, then p^2 is always a natural residue, meaning there is no wrap-around by the square.

Among 4n+1 primes and mixed couples, this fact forces them both to be squares, and never to fail to be mutual squares, since they must act the same way as each other. There is only a question whenever q < p^2, or vice versa. Otherwise, the results are automatic. Of course, we still must explain why they behave the way they do when p^2 < q. It is always good to see the task more clearly.

----------


## desiresjab

And one further miscellaneous fact: every square n^2 is the sum of the nth and (n-1)th triangular numbers.
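In symbols: n^2 = T(n) + T(n-1) with T(k) = k(k+1)/2, since the two halves sum to n(2n)/2. A quick Python check (T is my naming):

```python
def T(k):
    """k-th triangular number."""
    return k * (k + 1) // 2

print(all(n * n == T(n) + T(n - 1) for n in range(1, 1000)))  # True
```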

----------


## YesNo

> And one further miscellaneous fact: every square n^2 is the sum of the nth and (n-1)th triangular numbers.


This fact makes sense. Geometrically, a diagonal splits a square n-by-n array of lattice points into two triangular sets: one with n lattice points on a side (including the diagonal) and the other with n-1.

----------


## desiresjab

I state so many things incorrectly on my way to getting them right, that I should not be writing here on the subject of QR until I finish with it.

I think I am going to find that understanding why it works will prove a lot more difficult than understanding the mere mechanics of it. Its outside mechanics are not so bad, but what makes its guts work is a lot tougher to see.

----------


## YesNo

One reason to continue writing is to help clarify it for yourself. I don't mind reading it. It gives me something to think about. If it wasn't for you I wouldn't be thinking about any of this.

I looked at Eisenstein's proof on Wikipedia: https://en.wikipedia.org/wiki/Proofs...ic_reciprocity

Pieces of it are starting to click. I don't understand Eisenstein's lemma yet, but I think I see the geometric point. The goal is to show that the number of lattice points within AXYW has the same parity (is in the same residue class mod 2) as the number of points in ABC with an even x coordinate.

The reason the even x coordinates are needed is that he will consider the lattice points in XBCY and note that the number of lattice points in XBCZ is even: because q-1 is an even number, each column has an even number of lattice points. Since XBCY and YCZ partition that even number of lattice points into two sets, the two sets have the same parity. Then one can flip XCZ onto AXY. We already have the lattice points under even x coordinates; this flipping gives us the lattice points under odd x coordinates, so we have them all. Do the same for AYW and we have all the lattice points, all (p-1)(q-1)/4 of them, in AXYW, which is what we wanted.

The only piece missing is Eisenstein's lemma which is in the article, but I haven't finished understanding it yet.

Edit: I think I understand the lemma. It is interesting how it also uses the fact that when an even number is represented as the sum of two other integers, those two integers are either both even or both odd. It cannot be that one is even and the other odd, because then their sum would be odd.
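The bookkeeping can be sanity-checked for the p = 7, q = 11 case. A Python sketch (mu and nu are my names for the two triangle counts), showing that the two triangle sums fill the whole (p-1)/2 by (q-1)/2 rectangle:

```python
p, q = 7, 11
half_p, half_q = (p - 1) // 2, (q - 1) // 2
mu = sum(q * x // p for x in range(1, half_p + 1))  # points under y = (q/p)x
nu = sum(p * y // q for y in range(1, half_q + 1))  # points under x = (p/q)y
print(mu, nu, mu + nu == half_p * half_q)  # 8 7 True -- 15 points total, odd
```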

----------


## desiresjab

Yes, identical parity is a matter of (mod 2) with those columns. This is one of the first things Eisenstein makes clear.

P vs Q, when both are 4n+3 species. If the larger p wraps down to a square (mod q), then numbers under the smaller q may not square and then wrap to a square number (mod p).

11 (mod 7) wraps down to 4, a square (mod 7). Remember, there are only three different squares (mod 7): 1, 2 and 4. Any number squared (mod 7) wraps down to one of those three numbers.
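The three squares mod 7 are quick to list in Python:

```python
# Nonzero quadratic residues mod 7.
print(sorted({n * n % 7 for n in range(1, 7)}))  # [1, 2, 4]
```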

Conversely, since 7 is greater than the square root of 11, then 4, 5, 6, 7, 8, 9 or 10 squared would have to wrap back to 7, but something prevents them. I am now looking for exactly what prevents them from having reciprocity instead of irreciprocity. My goal is to see why two 4n+3 primes behave toward each other as they do. What prevents them from behaving as two 4n+1 primes or a mixed couple do? I feel I am on the right vein, looking for the exact spot to sink my pickaxe.

Your mention a few posts back that Gauss comes up with smaller numbers, but with correct parity, of course, was a good shout out.

It is not square vs square, in my current vision, but square vs some multiple of the other prime, which determines the precise behavior. This information is probably not in Eisenstein's rectangle, but I will not say for sure. There may be some reflection of it, but I think not as well, since there is a limit to the information the rectangle can contain.

----------


## YesNo

If one looked at Eisenstein's lattice points in the rectangle AXYW, the only time that rectangle will have an odd number of lattice points, as it does in the p = 7 and q = 11 example, is when both p and q are congruent to 3 mod 4. Otherwise AXYW will have an even number of lattice points. Also the number of lattice points in AXYW is equal to the sum of the number of lattice points in the triangle AXY and the number of lattice points in the triangle AYW.

If we have an odd integer that is the sum of two other integers, then one and only one of those other integers can be odd, that is, one and only one of (p|q) and (q|p) can equal -1 and be a quadratic non-residue. The other has to be a quadratic residue.

----------


## desiresjab

> If one looked at Eisenstein's lattice points in the rectangle AXYW, the only time that rectangle will have an odd number of lattice points, as it does in the p = 7 and q = 11 example, is when both p and q are congruent to 3 mod 4. Otherwise AXYW will have an even number of lattice points. Also the number of lattice points in AXYW is equal to the sum of the number of lattice points in the triangle AXY and the number of lattice points in the triangle AYW.
> 
> If we have an odd integer that is the sum of two other integers, then one and only one of those other integers can be odd, that is, one and only one of (p|q) and (q|p) can equal -1 and be a quadratic non-residue. The other has to be a quadratic residue.


This is all true, and these are the same things I keep repeating to myself. But I know there is a mechanism behind it all which prevents two 4n+3 primes from having the same "character." I am still in search of the precise mechanism, and now have only decent confidence that I will succeed.

There is a shortcut I am not sure you know. Suppose we have a Legendre symbol (p/q), and q is much larger. Instead of squaring numbers (mod q) to see if one is a square, we are always permitted to invert the symbol to (q/p). Then all we have to do is reduce. We only need one prime's quadratic character to know the other's, and this works equally with both species. One hundred percent legal.

Another legal move takes advantage of the symbol's multiplicative properties. If the top entry factors as p = ab, then (a/q)(b/q) will always give the right answer.
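Both moves can be checked numerically via Euler's criterion, (a/p) ≡ a^((p-1)/2) (mod p). A minimal sketch (the helper name is mine):

```python
# Euler's criterion gives the Legendre symbol directly:
# (a/p) = a^((p-1)/2) mod p, read back as +1 or -1.
def legendre(a, p):
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# Inverting then reducing: (11/7) becomes (4/7) = 1, while (7/11) = -1,
# the opposite sign, as two 4n+3 primes require.
print(legendre(11, 7), legendre(7, 11))
# Multiplicativity: (15/7) = (3/7)(5/7).
assert legendre(15, 7) == legendre(3, 7) * legendre(5, 7)
```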

----------


## YesNo

There seems to be more going on here as you mention.

I am aware of the complete multiplicative nature of the Legendre symbol and that one can reduce the larger prime mod the smaller one. That reduced number will not likely be a prime, so one would have to factor it. Also (a²|p) = 1, so these even powers can be discarded.

I am looking at Vanden Eynden's "Number Theory". He proves QR using Gauss's methods. I'll see if I can find something more enlightening by using that proof.

----------


## desiresjab

> There seems to be more going on here as you mention.
> 
> I am aware of the complete multiplicative nature of the Legendre symbol and that one can reduce the larger prime mod the smaller one. That reduced number will not likely be a prime so one would have to factor it. Also (a²|p)=1, so these even powers can be discarded.
> 
> I am looking at Vanden Eynden's "Number Theory". He proves QR using Gauss's methods. I'll see if I can find something more enlightening by using that proof.


By reducing, I only mean this: wrap the bigger number around the smaller modulus until the remainder is revealed, the way one can reduce 31 to 1 (mod 3). In our manipulations, it is always legal to invert the Legendre symbol, then reduce, since it is generally easier to "reduce" than to start squaring numbers to see if a square appears under the other modulus. I hope that was clear.

What I am looking for is probably not available, for I have never seen it mentioned that someone was explaining the mechanism of QR, they were only proving it. Gauss proved the law itself, and many others have since, but no one has explained it that I know of. If this is pulled off on a literature forum, it will be a coup for the ages.

----------


## YesNo

We might as well try explaining it to each other. Whether it is quantum physics or number theory, it probably doesn't make complete sense even to the people who know what they're talking about.

I think I understood what you meant by reducing the top prime in the Legendre symbol, (p/q). One might as well start with p > q and then find what p is congruent to modulo q. That won't likely be a prime any more, but one can try factoring it to simplify the calculation even more.

After looking at Vanden Eynden's text, I found this relationship which might be interesting and is part of his (Gauss's) proof of QR.

Let p and q be odd primes with p > q and p is congruent to q mod 4. Then there exists some integer a such that p = q + 4a. Now that last equation implies the existence of three other congruence relationships.

1) If p = q + 4a, then p ≡ q (mod 4a). This is just the original congruence including a.
2) If p = q + 4a, then p = 4a + q and so p ≡ 4a (mod q). Now, 4a is linked to p via q.
3) If p = q + 4a, then -q = 4a - p and so -q ≡ 4a (mod p). Now 4a is linked to -q mod p.

In this way 4a is the link between p and q. One can get the QR result for the cases when p ≡ q (mod 4) by considering the following:

(p/q) = (4a/q) = (4a/p) = (-q/p) = (-1/p)(q/p)

The first part comes from p ≡ 4a (mod q). The second part was proved in the book and is non trivial, but can be assumed for the moment. The third part comes from 4a ≡ -q (mod p) and the last part comes from the multiplicative property of the Legendre symbol.
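The three congruences and the resulting chain can be spot-checked for a concrete pair with p ≡ q (mod 4); the choice p = 19, q = 7 (so a = 3) is my own example:

```python
# Euler's criterion Legendre symbol, used to verify the chain above.
def legendre(a, p):
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p, q = 19, 7                  # 19 ≡ 7 ≡ 3 (mod 4)
a = (p - q) // 4              # a = 3, so 4a = 12
assert p % (4 * a) == q % (4 * a)   # 1) p ≡ q (mod 4a)
assert p % q == (4 * a) % q         # 2) p ≡ 4a (mod q)
assert (-q) % p == (4 * a) % p      # 3) -q ≡ 4a (mod p)
# The chain (p/q) = (4a/q) and (4a/p) = (-q/p) = (-1/p)(q/p):
assert legendre(p, q) == legendre(4 * a, q)
assert legendre(4 * a, p) == legendre(-q, p) == legendre(-1, p) * legendre(q, p)
print("all congruences check out for p = 19, q = 7")
```

The nontrivial middle step, (4a/q) = (4a/p), also comes out right here: both symbols evaluate to -1 for this pair.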

I'm not sure if this helps any, but it seemed interesting to me.

----------


## desiresjab

> We might as well try explaining it to each other. Whether it is quantum physics or number theory, it probably doesn't make complete sense even to the people who know what they're talking about.
> 
> I think I understood what you meant by reducing the top prime in the Legendre symbol, (p/q). One might as well start with p > q and then find what p is congruent to modulo q. That won't likely be a prime any more, but one can try factoring it to simplify the calculation even more.
> 
> After looking at Vanden Eynden's text, I found this relationship which might be interesting and is part of his (Gauss's) proof of QR.
> 
> Let p and q be odd primes with p > q and p is congruent to q mod 4. Then there exists some integer a such that p = q + 4a. Now that last equation implies the existence of three other congruence relationships.
> 
> 1) If p = q + 4a, then p ≡ q (mod 4a). This is just the original congruence including a.
> ...


It is interesting. It is close to what I was doing last night on my own. Another interesting fact that could easily be overlooked is that when you square the numbers mod m and every entry is duplicated once, it is an odd number and an even number that produce the same result in every case. What that means is that factorization is not unique in this language. 5² and 6² both equal 3 (mod 11).

I was over at a math site recently and asked what they know over there. No responses yet, but I am sure they will not do as well as we are doing here. They love to talk a big game over there about advanced calculus and other impressive topics, but ol' QR is quite enough to stop all their chatter, especially when I told them I was not interested in restatements of the law or facts surrounding it. Those surrounding facts I am interested in, but they don't need to know that, since we can get that right here and discover those facts for ourselves. I don't need them blabbing restatements forever because they don't know what else to do. Maybe someone over there will come through yet. Don't hold your breath.

When it comes to predicting whether an overlapping square will be even or odd, forget about it, at least so far with what we know.

----------


## desiresjab

Oh, and by reducing I mean simply to carry out the division mentally and find the remainder to see if it is a square. 31 reduces to 1 (mod 3), for instance. Specifically we could say, 10p+1=q, or (q-1)/10=p. This is in the neighborhood of what you were saying above.

A relevant concept I came up with last year is "highly even" and "barely even" numbers. 4n+1 numbers minus one are all highly even and 4n+3 numbers minus one are all barely even. A barely even number is divisible by 2 only once, a highly even number more than once. This falls right out of something called the ruler function, which I also discovered in my investigations.
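The "barely even"/"highly even" distinction is the 2-adic valuation, which is what the ruler function measures. A small sketch (the function name is mine):

```python
# The ruler function / 2-adic valuation: how many times 2 divides n.
def v2(n):
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# p ≡ 3 (mod 4)  =>  v2(p - 1) == 1  ("barely even")
# p ≡ 1 (mod 4)  =>  v2(p - 1) >= 2  ("highly even")
for p in [3, 7, 11, 19, 23]:
    assert v2(p - 1) == 1
for p in [5, 13, 17, 29, 37]:
    assert v2(p - 1) >= 2
print("valuations match the 4n+3 / 4n+1 split")
```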

These two types of numbers are obviously germane to QR. The higher degree of evenness of Eisenstein's rectangle when both triangles have an even number of lattice points must certainly be an important fact. When QR = -1, the overall rectangle has a downright paucity of factors of two, managing only a single factor of four.

The real problem with the above idea is that two (4n+1)-1 numbers which by definition have a higher degree of evenness can reject each other, so to speak, and mutually not be in each other's quadratic residue set.

I really enjoy our discussion here. I am quite lucky to find even one able volunteer willing to go along on this perilous mission with me. The biggest problem I have is that I am wearing myself out thinking about it.

The layout of this forum is actually superior to the math forum I visited when it comes to typing math. This forum at least allows exponents. The other forum still uses dumb up arrows for powers.

----------


## desiresjab

Damned duplicate posts!

----------


## desiresjab

The 7x11 rectangle with 60 interior lattice points is only divisible by 2 twice. Another conjecture gone awry, perhaps. So far, no matter what I try to connect the behavior to, it only looks promising for a while.

But wait. Are all such rectangles whose dimensions are (4j+3)(4k+3) divisible by two only twice? Is that the nature of them? Yes, of course. What else would I be talking about? (Beat my own forehead). The train is still on track. One wheel anyway.

This paucity of 2's may lead directly to the mother lode, the reciprocity mechanism.

----------


## Dreamwoven

> Damned duplicate posts!


Well, it's better than the problem you had a while back, of disappearing posts!

----------


## YesNo

> It is interesting. It is close to what I was doing last night on my own. Another interesting fact that could easily be overlooked is that when you square numbers mod (m) and every entry is duplicated once, it is an odd number and an even number which produce the same result in every case. What that means is factorization is not unique in this language. 5² and 6² both equal 3 (mod 11).


It looks like 3 is a quadratic residue because both 5 and 6 can be squared to give 3 mod 11. But then we know half of the elements will be quadratic residues and the other half non-residues. So there should be two elements when they are squared that give 3.

Another way of looking at this is to ask what solutions there are to the following polynomial: x² - 3 ≡ 0 (mod 11). There should be 2 solutions and there are.

When we are looking at the residue classes mod 11, we aren't looking at integers any more. Instead we are working with equivalence classes of integers, or sets of integers. The integers form the integral domain Z, where there are primes because multiplicative inverses do not exist for all elements, but these residue classes live in the finite field Z/11Z, where there aren't any primes anymore. In the finite field, the non-zero elements all have multiplicative inverses and so they would be units, like 1 and -1 are in the integers Z. For example, let n, not equal to 0, be an element of Z/11Z; then since n¹¹⁻¹ = 1 (mod 11), the multiplicative inverse of n is n⁹ (mod 11).
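Both facts are easy to confirm by direct computation; a sketch:

```python
# By Fermat's little theorem n^10 ≡ 1 (mod 11) for nonzero n,
# so n^9 is the multiplicative inverse of n in Z/11Z.
for n in range(1, 11):
    inv = pow(n, 9, 11)
    assert (n * inv) % 11 == 1

# And x^2 ≡ 3 (mod 11) has exactly the two solutions 5 and 6:
roots = sorted(x for x in range(11) if (x * x) % 11 == 3)
print(roots)  # [5, 6]
```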





> I was over at a math site recently and asked what they know over there. No responses yet, but I am sure they will not do as well as we are doing here. They love to talk a big game over there about advanced calculus and other impressive topics, but ol' QR is quite enough to stop all their chatter, especially when I told them I was not interested in restatements of the law or facts surrounding it. Those surrounding facts I am interested in, but they don't need to know that, since we can get that right here and discover those facts for ourselves. I don't need them blabbing restatements forever because they don't know what else to do. Maybe someone over there will come through yet. Don't hold your breath.


I got an account on https://math.stackexchange.com/ to get more information as well. It is good to have questions. The available answers aren't all the answers. Although QR is useful, what we really want is a quick way to evaluate (p/q) without having to consider (q/p). Quadratic reciprocity allows us to evaluate the one that is easiest to calculate, but perhaps there is a faster method. That sounds to me like what you are looking for.




> When it comes to predicting whether an overlapping square will be even or odd, forget about it, at least so far with what we know.


That is an interesting question. One doesn't have to take the representatives for the equivalence classes from {0,1,...,p-1}. They could come from {-(p-1)/2,...,(p-1)/2}. The evenness and oddness of the result might change when using that set.

----------


## desiresjab

> It looks like 3 is a quadratic residue because both 5 and 6 can be squared to give 3 mod 11. But then we know half of the elements will be quadratic residues and the other half non-residues. So there should be two elements when they are squared that give 3.
> 
> Another way of looking at this is to ask what solutions are there to the following polynomial: x² - 3 = 0 (mod 11) There should be 2 solutions and there are.
> 
> When we are looking at the residue classes mod 11, we aren't looking at integers any more. Instead we are working with equivalence classes of integers, or sets of integers. The integers would be in the integral domain Z, where there are primes because multiplicative inverses for all elements do not exist, but these residue classes are in the finite field Z/11Z where there aren't any primes anymore. In the finite field, the non-zero elements all have multiplicative inverses and so they would be units like 1 and -1 are in the integers Z. For example let n, not equal to 0, be an element in Z/11Z, then since n¹¹⁻¹ = 1 (mod 11), the multiplicative inverse of n is n⁹ (mod 11).
> 
> 
> 
> 
> ...


Huh? I am not looking for a faster way to do anything. I have said so many times what I am looking for that I do not feel like saying it again right now.

Something I said before is coming true. Hardly anyone understands quadratic reciprocity. No one is answering on the math forum I visited. They love to show off. If anyone knew, they would surely answer. I feel partially vindicated in that the problem truly is difficult. Learning enough about it to pass a course in elementary number theory is not that hard, but knowing what I am asking--now that is hard.

To address something you said: In fact, the parity would change when we go to negative representatives of the residue system. Good observation.

----------


## YesNo

What question did you ask them?

----------


## desiresjab

> What question did you ask them?


I said I was only interested in the mechanism itself that made 4n+3 primes behave as they do toward each other in QR, not restatements of the law or facts surrounding it, and that I insisted on seeing this in terms of the numbers themselves rather than a higher abstraction coming out of group symmetries and the like. I admitted this might be like standing outside a forest with a flashlight looking for something hidden behind a tree, but still I asked for an explanation in terms of the numbers.

I suspect that the basis of some higher abstraction proofs in abstract algebra is to show an anti-symmetry between the two subrings of squares, or something along those lines. Those proofs look down from above, I want to look from below in the guts of the machine.

In the meantime I have developed what may be a valid conjecture: that the two triangles WAY and YAX in Eisenstein's rectangle will contain an identical number of lattice points when p and q are twin primes, and only then. This may seem obvious but be very hard to prove. It is an ad hoc conjecture, just a problem, like so many that Erdos proposed, perhaps not important, but a fact nonetheless and an interesting challenge to recreate upon.

A weakness of the conjecture is that I suspect that the outside members of prime triplets which are large enough might also do this. Okay, I leave out the _if and only if_ part of the conjecture.
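The twin-prime half of the conjecture at least survives a brute-force check. This is empirical evidence only, not a proof, and the function name is mine:

```python
# Count the lattice points on each side of the diagonal of the
# (p-1)/2 by (q-1)/2 quarter rectangle, then test twin prime pairs.
def triangle_counts(p, q):
    below = above = 0
    for x in range(1, (p - 1) // 2 + 1):
        for y in range(1, (q - 1) // 2 + 1):
            if p * y < q * x:
                below += 1
            else:
                above += 1
    return below, above

for p in [3, 5, 11, 17, 29, 41]:    # p and p + 2 both prime
    b, a = triangle_counts(p, p + 2)
    print(p, p + 2, b, a)           # the two counts agree for each pair
```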

----------


## desiresjab

After merely checking all of my experiments on graphing paper, I am positive the conjecture could be extended to all prime triplets from at least the second such triplet to infinity, with a stipulation. That stipulation is that two members of the triplet are 4n+1 numbers, because when two 4n+3 primes clash we know they will obviously have different numbers of lattice points in the two triangles, since one count will be even and the other odd. As long as our prime triplets contain two 4n+1 numbers, we will be okay, and I am pretty sure that conjecture would hold.

----------


## YesNo

> In the meantime I have developed what may be a valid conjecture, that the two triangles WAY and YAX in Eisenstein's rectangle will contain an identical number of lattice points when p and q are twin primes, and only then. This may seem obvious but be very hard to prove. It is an ad hoc conjecture, just a problem, like so many that Erdos proposed, perhaps not important, but a fact nonetheless and an interesting challenge to recreate upon.
> 
> A weakness of the conjecture is that I suspect that the outside members of prime triplets which are large enough might also do this. Okay, I leave out the _if and only if_ part of the conjecture.


That sounds like an interesting problem. What do you mean by prime triplets? Three consecutive primes?

----------


## desiresjab

> That sounds like an interesting problem. What do you mean by prime triplets? Three consecutive primes?


There are two different brands for p, q and v, {n, n+2, n+6} and {n, n+4, n+6}.

The conjecture is that as long as at least two in either set are of the form 4n+1, they will obviously express not only the same parity with each other in all three pairings, but also have the same number of lattice points in WAY and YAX, as seen from Eisenstein's rectangle as represented on Wikipedia.

It seems intuitively clear, but I don't know how to prove it. I am not saying that is the maximum boundary condition, either. That is, there may be wider gaps than six which allow for an identical number of lattice points in both triangles. It depends on only two things--the absolute cardinality of p,q and v, and whether only one of them is a 4n+3 number.

The larger the absolute magnitude of the triplet, the closer the ratio of any two of them will be to the 1:1 ratio of a square. Since this holds even for small triplets, where the ratio is not as close to 1, it must be true for ones of larger absolute magnitude that meet the only other condition.

What I suspect could be proven with analytical methods is that for any gap, as wide as one wants to make it, WAY and YAX can still produce the same number of lattice points, as long as the primes involved are large enough and both are not type 4n+3. Similar things have been proven about primes and other intervals. This one feels intuitively right. It might well have been already proven, or at least conjectured.

----------


## YesNo

After sleeping on your problem related to twin primes, I realized I couldn't solve it. 

It would be easy to show that the parity of the lattice-point counts in the two triangles is the same, but not that there are exactly as many lattice points in both triangles. In the triplets, having two 4n + 3 primes would make the number of lattice points in the rectangle odd, and so the parities in the triangles would differ.

One way of solving it might be to go through Eisenstein's construction using a prime p = 4m + 1 and then seeing if the lattice points remain the same using 4m + 3. I realize I don't understand Eisenstein's proof well enough to do this easily.

Also one might be able to generalize this to any two numbers whose difference is 2. Some of the points might be on the diagonal line separating the two triangles, but then they would either not be counted or counted in both triangles.

If you haven't published this problem, it might be worth doing so say on places like math.stackexchange.com. An interesting question is more valuable than a quick solution.

----------


## YesNo

I was thinking more about the number of lattice points in the two triangles. Intuitively, it would seem that there should be the same number of lattice points in both triangles because the diagonal line divides the rectangle into two equal area triangles. With two 4m + 3 primes the total number of lattice points in the rectangle is odd, so one of the triangles should have an odd number of lattice points and the other an even number. There has to be a difference of at least 1 for those primes.

Do you have an example (two primes p and q) where the difference in the number of lattice points is greater than 1?

Maybe the more basic question is to find pairs of primes that divide the lattice points in the two triangles so that the difference in the number of lattice points in each triangle gets larger.

----------


## desiresjab

> I was thinking more about the number of lattice points in the two triangles. Intuitively, it would seem that there should be the same number of lattice points in both triangles because the diagonal line divides the rectangle into two equal area triangles. With two 4m + 3 primes the total number of lattice points in the rectangle is odd, so one of the triangles should have an odd number of lattice points and the other an even number. There has to be a difference of at least 1 for those primes.
> 
> Do you have an example (two primes p and q) where the difference in the number of lattice points is greater than 1?
> 
> Maybe the more basic question is to find pairs of primes that divide the lattice points in the two triangles so that the difference in the number of lattice points in each triangle gets larger.


Examples are easy to come by. For 5 and 11, the triangles have 6 and 4 lattice points. The difference always has to be at least 2 for examples of the same parity.

For primes 5 and 15, they are 9 and 7.

It has not failed for twin primes or correctly composed prime triplets, and I have not found it to hold anywhere else.
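These counts can be reproduced by brute force. Note that a later post corrects "5 and 15" to 5 and 17, which is the pair used here:

```python
# Brute-force lattice counts on each side of the diagonal of the
# (p-1)/2 by (q-1)/2 quarter rectangle, for the pairs quoted above.
def triangle_counts(p, q):
    below = above = 0
    for x in range(1, (p - 1) // 2 + 1):
        for y in range(1, (q - 1) // 2 + 1):
            if p * y < q * x:
                below += 1
            else:
                above += 1
    return below, above

print(triangle_counts(5, 11))  # (6, 4): difference 2, same parity
print(triangle_counts(5, 17))  # (9, 7): the corrected second example
```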

* * * * *

What I have done with regard to QR is move on to abstract algebra. Fortunately for myself, I can relate a great deal of what they are saying to what I already know of groups, rings and fields from number theory; otherwise I would be lost. In lecture 27 of 38 they finally got real close to QR.

----------


## YesNo

> Examples are easy to come by for 5 and 11, the triangles have 6 and 4 lattice points. The difference always has to be at least 2 for examples of the same parity.


That's good to know.




> For primes 5 and 15, they are 9 and 7.


Although 15 is not prime, I would think this should work for numbers in general. Some of the lattice points could be on the diagonal line if one doesn't use primes.




> It has not failed for twin primes or correctly composed prime triplets, and I have not found it to be true anywhere else.


I wonder if it is possible to pair the columns of lattice points. For example, the column where x = 1 would pair with the column where x = (p-1)/2. I suspect the number of lattice points from just these two columns would be the same in both triangles. Then proceed by induction, or some other means, to look at the other column pairs.




> What I have done with regards to QR is moved on to abstract algebra. Fortunately for myself, I can relate a great deal of what they are saying to what I already know of groups, rings and fields from number theory, otherwise I would be lost. In lecture 27 of 38 they finally got real close to QR.


Which text are you using? YouTube may also have interesting reviews of algebra.

----------


## desiresjab

> That's good to know.
> 
> 
> 
> Although 15 is not prime, I would think this should work for numbers in general. Some of the lattice points could be on the diagonal line if one doesn't use primes.
> 
> 
> 
> I wonder if it is possible to pair the columns of lattice points. For example, the column where x = 1 would pair with the column where x = (p-1)/2. I suspect the number of lattice points from just these two columns would be the same in both triangles. Then proceed by induction, or some other means, to look at the other column pairs.
> ...


I meant 5 and 17. I believe there was one pair of primes that gave 9 and 6, but I cannot remember what the pair was.

Presently, I do not have an abstract algebra text. I am doing everything off the internet.
In case you want a link to the thirty-eight lectures, I will provide it below. The instructor's name is Gross, from Harvard, and this guy is exceptional.

Abstract algebra is a whole new language. They operate at a very high level of abstraction. If you miss something, you have to go back. Everything is dependent on what is already supposed to have been learned. Very abstract things, like manipulating and untangling compositions of functions. You have to know a kernel from an image, and you constantly have to check to make sure what you are working on is actually associative or commutative, and so on.

He may get to QR in the next lecture I have up. What I can tell you is the solution will be buried under even more layers of abstraction than I imagined.

I try to go too fast--six or seven lectures per day or more. That guarantees I will have to go back and do it again. But by going ahead to the end, I know exactly what I should be concentrating on the second time around. The method is not as faulty as it seems.

https://www.youtube.com/watch?v=TsLW...=10#t=6.688125

----------


## YesNo

I think I found it: https://www.youtube.com/watch?v=EPQg...8AC5CABC1321A3

----------


## desiresjab

> I think I found it: https://www.youtube.com/watch?v=EPQg...8AC5CABC1321A3


That is the right guy. Benedict Gross. I think he is chairman of the department of mathematics at Harvard. He never does get to QR, though there is a fair amount of material about squares, since that was a big subject for Gauss. The best minds today are still working on the material developed out of Gauss, who set the course for everything in the field and has never had a conjecture overturned anywhere in math or science, while many have been verified.

Instead of numbers, these people study equivalence classes of numbers and of polynomials, both real and complex, through structures like groups, rings and fields. Every structure is carefully defined, and one has to know them apart and be aware of the criteria for being in one or the other during any process.

* * * * *

If a ground level view of the mechanism for 4n+3 primes in QR is possible, maybe someday I will find it. The tools and results of abstract algebra are often macroscopic but probably capable of focusing down to the particular mechanism responsible, too.

----------


## desiresjab

No sooner do I give up than I see what seems to be enough. It is like I said before: if p is a 4n+3 prime, p-1 will be divisible by 2 only once. The mechanism is best described for illustration in my own terminology as the difference between "barely even" and "highly even" numbers. One of those classes is where you end up once you subtract 1 from p.

I had to look at it long and painfully to finally confirm that what was staring me in the face was the actual mechanism itself. That is why I am dumb.

All that remains is to verify and confirm that one understands how we got to the point of (p-1)/2 in the first place. That is incredibly easy, already done. I am finished.

----------


## desiresjab

Let me sum up, try to anticipate any questions, and move these results back into a discussion of general cosmology. 

The whole giant digression involving QR took place because I wanted to take the proposition that God could not make a universe where 2 is not the successor of 1 to a higher level, on a road to what might even include all of mathematics, but at minimum enlarging the statement to _God could make no universe where the statements of mathematics would be false, from at least the founding axiom through to somewhere beyond the law of quadratic reciprocity_. Anything theoretically true here would be theoretically true in any other universe as well, and vice versa.

_No such universe is imaginable_, might be a less religiously provocative way of stating it. Now that I can state it, there is only to wrap up the discussion of QR and return to cosmology, where the above will be one of the postulates of my personal philosophy within cosmology--mathematical cosmology, I suppose.

A gem that comes out of this is that we have the capability to understand any universe, and any universe of any description, no matter how different from our own, would have the capability of understanding our universe. That capability, consisting of mathematics and its growing extensions, would remain invariant across universes, while retaining its elastic variability and variety.

* * * * *

Eisenstein's p by q rectangle must be viewed as a scaling object: just as if we had two intermeshing gears with radii corresponding to the lengths of the two 4n+3 primes, it expresses their ratio. I think his rectangle must be the simplest scaling object for this problem.

As I suspected a while back, the problem is a paucity of 2's when both primes are 4n+3. Once the rectangle is divided into four quadrants, there is no 2 left over, in other words, each quadrant contains an odd number of lattice points. That paucity of 2's forces the diagonal to cut the smaller rectangle of WAYX into two unequal halves of different parity.

The interior rectangle within ABCD containing only interior lattice points holds

(p-1)(q-1)

of them. This is none other than Euler's phi function Φ(pq), the measure of how many numbers less than pq are prime to it. There are sixty lattice points in the interior of ABCD. Points on the perimeter are not prime to one prime or the other, so cannot be included. As usual, (mod 0) is not allowed.
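The interior count can be tied to Euler's phi by direct computation; a quick sketch (note the phi being matched is taken of the product pq):

```python
# For distinct primes p and q, the interior of the p x q rectangle
# holds (p-1)(q-1) lattice points, which equals Euler's phi of pq.
from math import gcd

p, q = 7, 11
phi_pq = sum(1 for n in range(1, p * q) if gcd(n, p * q) == 1)
print(phi_pq)  # 60
assert phi_pq == (p - 1) * (q - 1)
```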

Once odd primes have the extra freedom of at least one more factor of 2, the problem is resolved, and the two primes are forced to act together, forced to the same parity because their smaller rectangle contains an even number of lattice points.

At the beginning, Eisenstein calculates the number of even lattice points in ABCD. He knows that if they are asymmetrical in the two halves of ABCD, so will the odd lattice points be, to make up for the discrepancy. The number of even lattice points in ABCD is of course thirty. His initial additive method was good enough for the parity of p, (-1)¹⁷, but not good enough to obtain q's parity.

((p-1)/2)((q-1)/2) is indeed the total number of points in WAXY: 1/4 of the total points in ABCD, and 1/2 the number of even points.

(-1)^((5)(3)) is (-1)^(60/4), or (-1)^(Φ(pq)/4), after all. So Eisenstein's exponents are correct not only in their parity; they are also the "appropriate" exponents, in that their product is the number of lattice points in WAXY. More importantly, (p-1)/2 and (q-1)/2 are the numbers of quadratic residues for each prime, as we already know.

Now, that is everything about Eisenstein's rectangle. Does it really prove quadratic reciprocity?

Yes, and here is why I think so. Since (p-1)/2 and (q-1)/2 are a simple count of the quadratic residues for each prime, and their product is used as an exponent on -1, flipping it back and forth from negative to positive, both primes are represented. The two factors form the dimensions of the rectangle of inner points of WAXY, and the exponentiation's result can only be negative when both (p-1)/2 and (q-1)/2 are odd.
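The statement itself is easy to spot-check numerically. A sketch using Euler's criterion for the Legendre symbol (helper names are mine):

```python
def legendre(a, p):
    """Legendre symbol (a/p) by Euler's criterion: a^((p-1)/2) mod p, mapped to +1/-1."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def reciprocity(p, q):
    """Quadratic reciprocity: (p/q)(q/p) == (-1)^(((p-1)/2) * ((q-1)/2))."""
    return legendre(p, q) * legendre(q, p) == (-1) ** (((p - 1) // 2) * ((q - 1) // 2))

fours_plus_3 = [3, 7, 11, 19, 23, 31, 43, 47]   # all congruent to 3 mod 4
print(all(reciprocity(p, q) for p in fours_plus_3 for q in fours_plus_3 if p != q))  # True
```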

For me that quite settles the issue, not only for 4n+3 prime pairs but for any prime pair, excluding 2, whose formula I can recite but have not yet reasoned out. My focus has been the 4n+3 primes, knowledge of which I hoped would illuminate the triggering mechanism for other prime pairs as well, which has happened for me.

4n+3 primes have to be the same thing in any universe. The only fundamental difference between 4n+1 and 4n+3 numbers is the degree of evenness when you subtract 1 from them.

* * * * * 

Now that we all know this fundamental fact of numbers and how it constrains universes, we can proceed with broader cosmology again.

----------


## YesNo

> A gem that comes out of this is that we have the capability to understand any universe, and any universe of any description, no matter how different from our own, would have the capability of understanding our universe. That capability, consisting of mathematics and its growing extensions, would remain invariant across universes, while retaining its elastic variability and variety.


I agree that given the initial axioms, the mathematical results are invariant across all possible universes. That there exist other universes can be assumed based on knowing that our universe is not eternal. In particular, the big bang shows it had a beginning.

Edit: Regarding the "barely even" number, those having only one factor of 2, there is a concept called "singly even" or "oddly even" that matches that: https://en.wikipedia.org/wiki/Singly_and_doubly_even

----------


## desiresjab

> I agree that given the initial axioms, the mathematical results are invariant across all possible universes. That there exist other universes can be assumed based on knowing that our universe is not eternal. In particular, the big bang shows it had a beginning.
> 
> Edit: Regarding the "barely even" number, those having only one factor of 2, there is a concept called "singly even" or "oddly even" that matches that: https://en.wikipedia.org/wiki/Singly_and_doubly_even


I like highly even and barely even better. I modeled it after Ramanujan's idea of highly composite numbers, which assigns an index to each number to rank how composite it is. With the ruler function I can calculate and assign any even number its degree of evenness, without calculating those that precede it. 
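The ruler function is a one-liner with a bit trick; a sketch (the function name is my own):

```python
def ruler(n):
    """The ruler function: the exponent of 2 in n, i.e. its 'degree of evenness'.
    n & -n isolates the lowest set bit; its bit length minus one is the exponent."""
    return (n & -n).bit_length() - 1

print([ruler(n) for n in range(1, 17)])
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
```

In these terms a "barely even" number has ruler value 1, and "highly even" numbers are those with large ruler values.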

A few last details to focus up. 

For the 7x11 rectangle, Eisenstein's exponents of 5 and 3 give a correct factorization of the lattice points in WAXY, but for the 7x3 rectangle, where there are three points in WAXY, the exponent is 3, which does not factor the lattice points of WAXY but sums them. Both the additive and the multiplicative coincidences may have been just that. I am not worried about that; it is minor and will sort itself out.

I saw the mechanism behind the behavior of 4n+3 primes from ground level. Mission accomplished. But not quite. Of slightly more consternation is that I understand the proof but not why it proves QR. True, WAXY is a *quad*rant, but the last bit of "seeing" has not clicked into place as to how this proves whether or not p and q are in each other's quadratic residue sets.

Can I not simply say that Φ/4, where Φ is Euler's totient function of pq, will always give the correct number of points in WAXY, and be done with it? I believe I can. This has to work for both species.

I understand everything about this proof except why it proves what it proves.

----------


## desiresjab

In the meantime, I see clearly that Φ/4, where Φ is Euler's totient function, will always give the correct number of lattice points in WAXY. This may work for 4n as well as 4n+2 numbers. I find this connection with the totient function highly intriguing.

[Φ(pq)]/4, in other symbols. Φ(pq) is what the RSA encryption system is based on. It is extremely hard to find Φ(pq) unless you know what p and q are. But you only know what pxq is, which means you have to find its factors, and that is almost impossible for huge, "barely composite" numbers with today's computing technology and math techniques.
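A toy illustration of that asymmetry, with trial division standing in for the genuinely hard factoring step (real RSA moduli are hundreds of digits, so this only conveys the shape of the problem):

```python
from math import isqrt

def phi_from_factors(p, q):
    """Trivial when the two prime factors are known."""
    return (p - 1) * (q - 1)

def phi_by_factoring(n):
    """Recover phi(n) for n = p*q by factoring n -- the step RSA bets is infeasible
    at cryptographic sizes."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return (p - 1) * (n // p - 1)
    raise ValueError("no factor found; n is prime")

n = 61 * 53                 # 3233, a classic textbook-sized modulus
print(phi_by_factoring(n))  # 3120, same as phi_from_factors(61, 53)
```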

----------


## desiresjab

Here is a very interesting near hit for the rectangle 11x7.

Φ(77)=60, Φ(60)=16, Φ(16)=8.

If Φ(60) had been 15 instead of 16...? Well, that is how research begins, I guess. But what if there is a pattern in the descending chain of Φ's anyway that only more investigation will ferret out? It should be easy to look at other examples, but my brain is shutting down right now. More later.

----------


## desiresjab

Why, looky here.

Φ(7x3)=12 → Φ(12)=4 → Φ(4)=2.

The reduction by a chain of Φ's, instead of by factors of 2, worked out just as before, probably close enough to warrant even further investigation.

The second Φ again equaled one more than the total number of lattices in WAXY; the third Φ equaled the greatest number of lattices in either of the triangles. Uh-oh, now we have to go on.

Φ(19x23)=396

Φ(396)=120

Φ(120)=32.

Here everything goes quite amiss and we see we are at a dead end using the chain of Φ's. It was merely another coincidence. But Φ/4 is not a coincidence. That baby is real and will get you the right parity and the correct number of points in WAXY, which in this case is 99.

Φ/4 rules; Φ of Φ of Φ does not. Now we know for sure. I do not think 120 is one more than 99, and I do not think 32=50. That case is closed.
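Both experiments are easy to replay; a sketch (the naive `phi` helper is my own):

```python
from math import gcd

def phi(n):
    """Euler's totient, brute force."""
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

def phi_chain(p, q, steps=3):
    """Iterate phi three times starting from p*q, as in the experiments above."""
    vals, n = [], p * q
    for _ in range(steps):
        n = phi(n)
        vals.append(n)
    return vals

print(phi_chain(11, 7))    # [60, 16, 8]
print(phi_chain(19, 23))   # [396, 120, 32]; WAXY has 9 * 11 = 99 points, so no match
```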

----------


## desiresjab

It seems wonderful, it is wonderful. Why do Φ and these squares interact at all?

What do Φ(pq)=(p-1)(q-1)=a, or for that matter pq=z itself, and d=rt, the distance formula, have to do with each other? They are in some kind of equivalence class with V=IR, the voltage formula, and the equations for uncountably many (literally) other phenomena. In abstract algebra these are called isomorphisms. That means they are just re-labelings of each other. They have the same _group_ characteristics. A popular way of saying it is: they are the _same under the hood_. Their tables look identical except for the difference in labels. Isomorphism. Of all those creepy _isms_ in abstract algebra it is the easiest to see clearly. Just when you think you see automorphism or homomorphism clearly, they add some more bugaboo to it or manage to make them unclear by other means.

(p-1)(q-1) just happens to belong to this equivalence class of functions, linear relationships known as _directly proportional to_. Something like that. One cannot help noticing the similarity between the equation for work done by two workers on a job and the one for resistance from two resistors in parallel, as another example. The phenomena of the world around us express certain classes of functions, replicated endlessly with only different labels _under the hood_. A few basic classes dominate much of the action, it seems to me at this moment.

I guess I will continue to loiter around the QR lobby until I see why Eisenstein has proved it. I see everything else about his beautiful proof, it has illuminated the operative principle of both 4n+3 primes and 4n+1 primes in QR by rectangular illustration, it provides exact numbers, I might as well hang around to see the reason it does what it purports to do.

I said I would be happy if I could see the general mechanism, and I have, but I guess I lied, for I am not satisfied now until I can see what makes Eisenstein's rectangle a proof of QR. All of it is right on the page and obvious the way math always is, a grand tautology, so eventually it will pop out clearly, the way the mechanism did, after my staring at it ignorantly forever. At the next moment of revelation I am bound to see more clearly something I have already stated, if past is precedent.

----------


## YesNo

> But Φ/4 is not a coincidence. That baby is real and will get you the right parity and the correct number of points in WAXY, which in this case is 99.


If p and q are distinct primes then Φ(pq) = Φ(p)Φ(q) = (p-1)(q-1). All we need is to divide by 4 to get the number of lattice points. So I agree Φ/4 will always equal the number of lattice points in Eisenstein's rectangle.

----------


## YesNo

> I said I would be happy if I could see the general mechanism, and I have, but I guess I lied, for *I am not satisfied now until I can see what makes Eisenstein's rectangle a proof of QR*. All of it is right on the page and obvious the way math always is, a grand tautology, so eventually it will pop out clearly, the way the mechanism did, after my staring at it ignorantly forever. At the next moment of revelation I am bound to see more clearly something I have already stated, if past is precedent.


It is good you are not satisfied. Otherwise you would stop looking.

One of the problems with both Gauss's and Eisenstein's proofs is that they do not directly show why (p-1)(q-1)/4 should be the exponent. The direct proofs are in the Gauss Lemma and the Eisenstein Lemma. Those exponents in those lemmas are different from (p-1)(q-1)/4. But they want the exponent to be (p-1)(q-1)/4 because that is computationally easier to work with and so they transform their results so that the parity is preserved, which is all I think they are interested in.

At least that is how I see it at the moment.

----------


## desiresjab

> It is good you are not satisfied. Otherwise you would stop looking.
> 
> One of the problems with both Gauss's and Eisenstein's proofs is that they do not directly show why (p-1)(q-1)/4 should be the exponent. The direct proofs are in the Gauss Lemma and the Eisenstein Lemma. Those exponents in those lemmas are different from (p-1)(q-1)/4. But they want the exponent to be (p-1)(q-1)/4 because that is computationally easier to work with and so they transform their results so that the parity is preserved, which is all I think they are interested in.
> 
> At least that is how I see it at the moment.


I feel that 3 & 5 must be the natural exponents. The only way to get lower is to have 1 & 3, like the 7x3 rectangle. Also there is the connection of Φ to them. 

I do not believe the sum of the two exponents has much bearing. The exponents using Eisenstein's algorithm are 1 & 3 for the 3x7 rectangle. Those exponents do the job, Φ/4 does part of the job even faster, but it does not distinguish between p and q, as the exponents do.

Something just became slightly more focused in my head that is still blurred. If (p-1) & (q-1) were the same number and we multiplied them together, we would be squaring, in which case we would merely add their exponents, wouldn't we? This is why adding exponents cannot work here. But the symmetries between squaring and what we are doing are intriguing. One feels that box might contain some secrets.

Several things we know for sure: two sure ways to the correct result are Eisenstein's algorithm and Φ/4. It just happens that the intermediary step, Φ/2, is the number of quadratic residues of each prime, and the final step gives the dimensions of the rectangle inside WAXY that the lattice points sit on.
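The residue count is easy to verify by brute force; a minimal sketch:

```python
def quadratic_residues(p):
    """The distinct nonzero quadratic residues modulo an odd prime p."""
    return {k * k % p for k in range(1, p)}

for p in [3, 7, 11, 19, 23]:
    assert len(quadratic_residues(p)) == (p - 1) // 2   # phi(p)/2 residues each

print(sorted(quadratic_residues(7)))   # [1, 2, 4]
```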

I am beginning to see that QR is as centrally connected as its reputation says it is. These connections are only the tip of the iceberg, QR reaches into everything. No doubt there are proofs that exploit the connection to Φ, just as there are trigonometric proofs, proofs that exploit the Pythagorean theorem, proofs that exploit Fermat's little theorem, and all kinds of proofs from group theory and abstract algebra, including at least one vector proof.

It will fall into place, but not without more staring and effort. I am hoping for a few weeks or less.

Something about 5x3 is really nagging me.

----------


## YesNo

> Something just became slightly more focused in my head that is still blurred. If (p-1) & (q-1) were the same number and we multiplied them together we would be squaring, in which case we would merely add their exponents, wouldn't we? This is why adding exponents cannot work here. But the symmetries between squaring and what we are doing is intriguing. One feels that box might contain some secrets.


From one direction, we really should be adding the exponents to get the result. That is where Gauss's and Eisenstein's lemmas start. They compute an exponent for each of (p/q) and (q/p) with -1 as the base. Say these are u and v respectively. Then (p/q)(q/p) = (-1)^(u+v).

From the other direction, the product gives the desired result. That is the result Legendre observed when he conjectured the result depended on whether p and q were congruent to 1 or 3 modulo 4. Note that (p-1)(q-1)/4 is just the statement of the desired parity using p and q mod 4.

To connect the two directions there are transformations from the sum of those exponents to that product that preserve parity.
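For Eisenstein's lemma in its standard form, the two floor-sum exponents in fact add up to exactly ((p-1)/2)((q-1)/2), by the quadrant lattice-point count. A sketch (the function name is mine):

```python
def eisenstein_u(q, p):
    """Exponent u in Eisenstein's lemma, (q/p) = (-1)^u:
    u = sum of floor(kq/p) for k = 1 .. (p-1)/2, with p an odd prime and q odd,
    coprime to p. Geometrically, u counts quadrant points under the diagonal."""
    return sum(k * q // p for k in range(1, (p - 1) // 2 + 1))

p, q = 7, 11
u, v = eisenstein_u(q, p), eisenstein_u(p, q)
print(u, v, u + v)   # 8 7 15, and 15 == ((7-1)//2) * ((11-1)//2)
```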

----------


## desiresjab

> From one direction, we really should be adding the exponents to get the result. That is where Gauss's and Eisenstein's lemmas start. They compute an exponent for each of (p/q) and (q/p) with -1 as the base. Say these are u and v respectively. Then (p/q)(q/p) = (-1)^(u+v).
> 
> From the other direction, the product gives the desired result. That is the result Legendre observed when he conjectured the result depended on whether p and q were congruent to 1 or 3 modulo 4. Note that (p-1)(q-1)/4 is just the statement of the desired parity using p and q mod 4.
> 
> To connect the two directions there are transformations from the sum of those exponents to that product that preserve parity.


I think we could be on the right track with these thoughts. Hell, maybe you have already seen it through and through. All I know is I haven't.

----------


## YesNo

I don't know much about quadratic reciprocity. I haven't figured it out. Besides, once we get past quadratic reciprocity, there's cubic, and then quartic and then on and on for all I know. So we have only skimmed the subject.

I have been trying to put together the Artin's Conjecture puzzle. I'm starting with 2 and trying to find ways to get an infinite number of primes for which 2 is a primitive root. I think it is easy to get an infinite number of composite numbers for which 2 is a primitive root: just take an odd prime p for which 2 is a primitive root; then it should be a primitive root for p^n if (2/p) = -1 (mod p^2). Actually, I am not sure that is right, but it doesn't solve Artin's Conjecture. That requires an infinite number of primes, not composites.
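A brute-force order computation makes the primitive-root condition concrete for small primes; a sketch (function name mine):

```python
def is_primitive_root(a, p):
    """True if the multiplicative order of a mod prime p is p - 1,
    i.e. the powers of a hit every nonzero residue mod p."""
    x, seen = 1, set()
    for _ in range(p - 1):
        x = x * a % p
        seen.add(x)
    return len(seen) == p - 1

print([p for p in [3, 5, 7, 11, 13, 17, 19, 23, 29] if is_primitive_root(2, p)])
# [3, 5, 11, 13, 19, 29]
```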

To keep my motivation, I have started asking questions on math.stackexchange centered around trying to show that there are an infinite number of Sophie Germain primes. These are primes p such that 2p + 1 is also prime. They are easier to work with. If there are infinitely many of them, that should solve Artin's Conjecture for a = 2.
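The small Sophie Germain primes are quick to list; a sketch with a trial-division primality test:

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

germain = [p for p in range(2, 100) if is_prime(p) and is_prime(2 * p + 1)]
print(germain)   # [2, 3, 5, 11, 23, 29, 41, 53, 83, 89]
```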

----------


## desiresjab

> I don't know much about quadratic reciprocity. I haven't figured it out. Besides, once we get past quadratic reciprocity, there's cubic, and then quartic and then on and on for all I know. So we have only skimmed the subject.
> 
> I have been trying to put together the Artin's Conjecture puzzle. I'm starting with 2 and trying to find ways to get an infinite number of primes for which 2 is a primitive root. I think it is easy to get an infinite number of composite numbers for which 2 is a primitive root: just take an odd prime p for which 2 is a primitve root, then it should be a primitive root for pn if (2/p) = -1 (mod p2). Actually, I am not sure that is right, but it doesn't solve Artin's Conjecture. That requires an infinite number of primes, not composites.
> 
> To keep my motivation, I have started asking questions on math.stackexchange centered around trying to show that there are an infinite number of Germain primes. These are primes p such that 2p + 1 is also a prime. They are easier to work with. If there are infinitely many of them that should solve Artin's Conjecture for a = 2.


I have put my ten thousand hours of music in; I have put my ten thousand hours of writing prose and poetry in. With these enterprises I did much more than that. But I must be at about five thousand hours of math because the moves are not instinctive the way they would be with professional mathematicians. It takes me a long time, I make a lot of mistakes and move in with false assumptions quite often that later embarrass me.

For us amateurs maybe that is the way of it. Most unsolved problems scare me off because I know my inability to resist temptations. I only move in with a problem when it advances my learning, or if it has a particular form that intrigues me. Sometimes they have been fun problems that I found here or there; I got into cryptarithms for a while. Most of the time now I move in where I think I can learn as much as possible.


I have lived with Fermat's little theorem, Euler's phi function and quadratic reciprocity in my time. I am going to court Euler's divisor functions soon, because the few kisses I stole were not enough. That girl has a lot to say. Very centrally connected. After that I have to move in with complex numbers for a long time. Metamathematics has been going on there since Gauss formalized the language.

I need two minds or more. The other side has now pulled me back to my fictional trilogy. That enterprise is where quadratic reciprocity is--awaiting the final clarity of events that make everything fit. The series is done except for some middle chapters left out in the inspirational heat of moving forward.

----------


## YesNo

On the one hand I am not trying to solve Artin's Conjecture. I'd be happy with the subjective process of understanding it before moving onto something else. On the other hand I feel like a teenager looking at the stack of books in the library and planning on how to read all of them. What probably counts is the subjectivity involved in understanding something.

Have you ever posted your question on a mathematics forum about the number of lattice points in Eisenstein's triangles? That sounds like an interesting puzzle. I haven't heard it mentioned except here, but then I don't know much about lattice points. People only started counting lattice points a few decades ago based on some cursory research I did into your problem.

Regarding those lattice points, if p = q then the two triangles should have the same number of lattice points since the slope of the diagonal would be 1 and it should go through all the lattice points on the diagonal. Letting q = p + 2 keeps that slope close to 1 and the diagonal misses all the lattice points. That's how I'm looking at the problem at the moment.

----------


## desiresjab

Within a short while I will see the final piece. I don't need them now.

----------


## YesNo

Does anyone participate in grid computing volunteer projects by letting those projects use their computer's resources in the background? 

I understand some of them allow one to participate in searches for extraterrestrial life or pulsars or even special prime numbers. Here is the site: http://boinc.berkeley.edu/download.php

----------


## Dreamwoven

I think that is to give access to your computer for them to use it while you sleep or are away.

----------


## YesNo

Yes, they would use it for some project you signed up for. It could be something related to cosmology (like gravitational waves) as well as astronomy or health or even mathematics (finding primes). 

I haven't signed up for it, but I do have an old computer that I might as well clean up and turn on for them to use.

It is sort of like a large communal gaming system that might have some use-value besides the game itself.

----------


## YesNo

I started up the old laptop, downloaded the BOINC software, installed it as administrator, signed up for PrimeGrid and the computer is now happily busy again. Well, I don't know if it is happy about that, but I am glad to put it to use.

----------


## YesNo

I started reading Jimena Canales, "The Physicist and the Philosopher: Einstein, Bergson, and the debate that changed our understanding of time".  

I am hoping it will help me understand the cultural context in which we view cosmology today. The debate mentioned in the book occurred on April 6, 1922, and apparently has cultural influence to the present day. I hadn't heard of it before, but we don't have to be aware of the influences we have, especially those we take for granted.

----------


## desiresjab

My conclusion is that Eisenstein already knew what he was after. He was after a way of arriving back at Euler's criterion through his lattice point rectangle representation, which he managed to do. He shows that the number of even lattice points in ABC has the same character as the total points in CZY which is no different from AYX. Then it is seen that WAY is also equal to the unmarked triangle in his diagram that sits under CZY.

He shows that his exponent u+v is equal to ((p-1)/2)((q-1)/2). Very good. The deed is done.

However, it does not show or explain the mechanics of why moduli behave toward each other according to species, landing or not landing in one another's quadratic residue sets. You cannot get those mechanics from this proof. It is the wrong kind of proof for that; it does not delve there. Viewing those mechanics may require learning some new mathematics. I am far more interested in the mechanics than the proof itself, it turns out. I need to see different proofs to determine which one explains the mechanics I am after.

The same mechanism seen in Eisenstein's proof that clearly explains the behavior of 4n+3 primes and pinpoints the reason for it does not explain the quadratic behavior of moduli in general. Without foreknowledge I believe there is no way Eisenstein could have worked backwards from his diagram to explain that quadratic reciprocity was merely Euler's criterion.

----------


## YesNo

There is only so much one can get from any one proof. Then one has to look elsewhere for other interesting ideas.

I put together a Google sheet to test your conjecture that the numbers of lattice points in the two triangles are equal for twin primes. It looks like it works for twin primes less than 100. That is as far as I tested it.
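The spreadsheet check is a few lines of code as well; a sketch counting the points strictly below and above the full rectangle's diagonal (none lie on it, since p and q are coprime):

```python
def triangle_counts(p, q):
    """Lattice points of the open p x q rectangle strictly below and strictly
    above the diagonal from (0,0) to (p,q)."""
    below = sum(x * q // p for x in range(1, p))
    total = (p - 1) * (q - 1)
    return below, total - below

twins = [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
print(all(a == b for a, b in (triangle_counts(p, q) for p, q in twins)))   # True
```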

Another topic that has caught my attention is the length of prime gaps. For twin primes the length is 2; the gap, however, can be arbitrarily large.

----------


## desiresjab

> There is only so much one can get from any one proof. Then one has to look elsewhere for other interesting ideas.
> 
> I put together a Google sheet trying to test your conjecture that the number of lattice points in the two triangles for twin primes are equal. It looks like it works for twin primes less than 100. That is all the further I tested it.
> 
> Another topic that has caught my attention are the lengths of prime gaps. For twin primes the length would be 2, however, the gap could be arbitrarily large.


There are gaps as long as you please if you go far enough out the number line. That has been proven. That proof must be a hard nut.
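The standard construction behind that fact is actually explicit: n! + 2 through n! + n are n - 1 consecutive composites, since n! + k is divisible by k for each k from 2 to n. A sketch:

```python
from math import factorial

def composite_run(n):
    """n - 1 consecutive composite numbers: n! + 2, ..., n! + n."""
    return [factorial(n) + k for k in range(2, n + 1)]

run = composite_run(6)   # [722, 723, 724, 725, 726]
print(all(m % k == 0 for k, m in zip(range(2, 7), run)))   # True: each n!+k divisible by k
```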

----------


## YesNo

> There is any gap as long as you please if you go far enough out the number line. That has been proven. That proof must be a hard nut.


Yes. The proof that the gap can be arbitrarily large is well known. That there are infinitely many twin primes has not been solved. That's the hard problem. Here is a status from a Wikipedia article: https://en.wikipedia.org/wiki/Twin_prime

_On April 17, 2013, Yitang Zhang announced a proof that for some integer N that is less than 70 million, there are infinitely many pairs of primes that differ by N.[1][2] Zhang's paper was accepted by Annals of Mathematics in early May 2013.[3] Terence Tao subsequently proposed a Polymath Project collaborative effort to optimize Zhang's bound.[4] As of April 14, 2014, one year after Zhang's announcement, according to the Polymath project wiki, the bound has been reduced to 246._
So there are infinitely many prime pairs with gap at most 246. One just has to get that down to 2.

----------


## YesNo

> I started up the old laptop, downloaded the BOINC software, installed it as administrator, signed up for PrimeGrid and the computer is now happily busy again. Well, I don't know if it is happy about that, but I am glad to put it to use.


The computer crashed a few times, but I finally realized I should run it at 50% capacity rather than 100% capacity to give it a chance to cool off. Also I put it on top of small objects so that the heat can move away faster. But I got my first badge for running 10,000 points worth of stuff.

----------


## Danik 2016

Just trying to help with the posts. I am new at this forum and I myself lost some texts because this forum has a time out problem. Now, if it is a small post like this one I keep saving and editing it on the edit pad. If it is a long post like yours I write it on Word and then paste it on the forum page. 


> I lost another long post because of the idiotic setup of this forum. I am about done with this goat hole. It does not matter if I login first or not, it always tells me I do not have permission to post when I try to send my post, and I have to go through some other crap. Sometimes I have lost the post in the process. The people who run this outfit need to explain themselves.
> 
> Anyway, that was a great link. Right now I do not feel like trying to recreate my detailed post, so I will let it go for now.

----------


## desiresjab

> Just trying to help with the posts. I am new at this forum and I myself lost some texts because this forum has a time out problem. Now, if it is a small post like this one I keep saving and editing it on the edit pad. If it is a long post like yours I write it on Word and then paste it on the forum page.


My problems disappeared, perhaps yours will too.

----------


## YesNo

> I started reading Jimena Canales, "The Physicist and the Philosopher: Einstein, Bergson, and the debate that changed our understanding of time". 
> 
> I am hoping it will help me understand the cultural context in which we view cosmology today. The debate mentioned in the book occurred on April 6, 1922, and apparently has cultural influence to the present day. I hadn't heard of it before, but we don't have to be aware of the influences we have, especially those we take for granted.


After reading a couple chapters in this, I realize that Einstein was involved in two conflicts. One of them was with Bohr over quantum physics and the other was with Bergson over the reality of time. In both, Einstein won the reputation prize since he is remembered better than Bohr and Bergson, but he lost the quantum physics debate to Bohr and, as I am beginning to see, he likely also lost the time debate to Bergson. However, with Bergson, I don't understand the issues at stake as well. They involve time dilation and the reality of various measurements, but that is as far as I've got. This question relates to a cosmology thread in that it questions the "reality" of "space-time".

----------


## YesNo

I am over half way through Canales' history of the debate between Bergson and Einstein. 

I realize I would likely be on Bergson's side. By putting time and space together into "space-time" Einstein created a deterministic block universe where nothing new could happen. This doesn't fit reality as I experience it. So, being pragmatic, I assume there's something wrong with it.

Also it looks like Poincare and Lorentz, who came up with the measurable effects of relativity theory before Einstein did, might have had better ways of interpreting it, but I am still trying to figure out what those different interpretations are.

At a high level, the difference between Bergson and Einstein is obvious. Einstein takes a mathematical theory (from Lorentz) and then assumes that his preferred model _is_ reality rather than just a way to model reality. Time is linked to space because it is convenient for the mathematics to manipulate it that way. It is sort of like following Galileo and saying the Sun _really is_ the center of the universe, regardless of our current view that that is no more true than saying the Earth is the center of the universe. On the other hand, Bergson is interested in presenting the lived experience of time, which is not deterministic.

Canales did a good job of bringing in the various people who participated in this debate in the 20th century. I didn't realize how connected all of these ideas were.

----------


## YesNo

I finished the book a couple of days ago. I will probably have to read it again after it all settles.

The main problem in the book is whether time is _really_ a succession of infinitesimal instants such as a point on a mathematical line or whether it is something with duration that we access through our subjectivity moving from a past to a future. For practical purposes, like synchronizing clocks, there is use-value in modeling time as a point on a mathematical line. That is not the question. The question is whether time _really_ is a dimension of infinitesimal time-points linked to three dimensions of space-points called "spacetime". Einstein claims that spacetime is real. Bergson claims that the model has use value, but nonetheless it is a mathematical fiction falsified by our own experience of time.

One of the consequences of believing in spacetime is that the universe is then a "block" with four mathematical dimensions in which nothing happens. Both the future and the past are illusions. There is no "arrow of time". Nothing evolves. Why would that be the case? Because the mathematical equations used to model the universe don't change. Einstein promoted these equations from Lorentz' relativity model to reality itself. This allowed time to be _reversible_ in Einstein's view of reality.

Belief in spacetime implies belief that light is the maximal speed and that it is a mathematical (and physical) constant. It is not just that it is convenient to view light as constant, but that light really has a constant speed, not only today but throughout the history of the universe. Indeed it is convenient to make that assumption. Around 1900 people were looking for something that didn't change on which they could base measurements of both length and duration. They couldn't find anything else: it had been discovered some decades earlier that light had a finite speed, and then that we could not detect any difference in its speed.

Belief in spacetime is just one interpretation of relativity. Relativity itself predates Einstein's deterministic interpretation. Lorentz and Poincare had the mathematical model of special relativity prior to Einstein and neither Lorentz nor Poincare accepted Einstein's interpretation of it. What that means is that we can have relativity without being forced to accept Einstein's block universe determinism. What I wonder is to what extent this also applies to general relativity. 

Finally there is the media problem with relativity. Do we (that is you and me as people hearing the scientific discussions) accept a scientific interpretation on rational grounds or because it has been promoted in the media with questionable rhetoric? Is science for us a political event? In the case of Einstein he went out of his way to promote his interpretation in the media and he used ad hominem arguments against those opposing his views suggesting that they were too stupid to understand him or that they were antisemitic. Now Bergson was also Jewish, so the debate wasn't about antisemitism. I doubt that the people who disagreed with Einstein were any more stupid than the people promoting Einstein's own interpretation.

As a conclusion, I think Einstein's deterministic block universe speculation has been falsified both by quantum physics and by the progression of living organisms from birth to death. Time, whatever it is, is not reversible. Hence Einstein's interpretation of Lorentz' original relativity theory is false. Nor is it needed to keep the benefits of relativity theory. Also, the politics involved in that discussion makes me wary of the politics involved in scientific discussions that we hear today.

----------


## Dreamwoven

This is an interesting philosophical argument. What do you mean by politics in your last sentence? Alternatively, what would not be a political argument that would be acceptable?

----------


## YesNo

One probably cannot avoid politics in discussions about science. On one level it involves which group of scientists get a bigger share of limited funding. On another it involves which view of reality will dominate our own common sense. 

For example, I've been aware of the name Einstein since I was a child. I didn't even hear of Bohr until a few years ago when I started looking at quantum physics. I didn't know the name Bergson until I read this book. Although I had heard the name Lorentz because of the mathematics grounding relativity, he appeared more as a footnote in the theory of relativity, even though the idea was his. Why is Einstein so front and center in my common sense? He didn't win his debate against Bohr. He shouldn't have won any debate against Bergson. That is probably not "politics" in the strict sense of who will govern, but it affects who governs my common sense just as a political candidate might, and the ad hominem rhetorical techniques used to promote one person over the other seem similar.

----------


## YesNo

I am reading John Derbyshire's "Prime Obsession". He tried to explain the Riemann Hypothesis in such simple terms that anyone could understand it. I'll have to see if he was right. At one point, when he was explaining the derivative, he made this statement (page 108):

_The steepness of the curve varies from point to point. At every point it has a definite numerical value, though, just as your automobile has a definite speed at any point while you are accelerating--namely, the speed you see if you glance at the speedometer._

That made me realize how naively we move from a mathematical model to reality. Does reality really have "points"?

Zeno made the assumption that reality did have points and from there concluded that no motion could occur. I think Zeno was right. If space and time were mathematical lines with points, nothing could happen. I don't know if Zeno thought his conclusion implied that no motion really occurred or whether he was trying to show that reality could not be made out of mathematical points.

It was not just philosophers who questioned mathematically continuous reality. Quantum physics started with rejecting the view that energy was equally distributed across an infinite number of frequencies in the black body problem. Planck got around the problem by saying that energy was quantized, not continuous. Then things started to work.

Given Zeno and Planck, it is safe to say that reality and a mathematical continuum need not mix except at a level of approximation where one isn't looking too closely.

----------


## Dreamwoven

It was when the discussion became mathematical that I lost the thread.

----------


## YesNo

Cosmology today is full of mathematics. Rather than conscious Gods who act as agents like ourselves, such as Zeus or Thor or Brahma or Yahweh, we have today unconscious mathematical equations with t representing time that has been objectified into a line of points.

Mathematics is a god with a lowercase "g" because that god is unconscious. But we don't think of these modern cosmological stories as myths and literature, because we are caught in their enchantment, or bedevilment, depending on one's perspective.

----------


## desiresjab

I picked up Prime Obsession a few months ago and have been proceeding in fits and starts until I am about 3/4 through. It always helps to get more things straight about these topics, which are so difficult one may not approach them directly. One learns never to be surprised to find that work by Euler led to major developments by Riemann, or that Poincare anticipated giants not only in relativity but in fractal geometry as well, which is a first cousin of chaos theory. Like Riemann, Poincare possessed high gifts in both math and physics. But he got to live longer. He was also a communicator, whereas Riemann is often described as painfully shy.

Anyway, I know nothing of Bergson. From reading what you wrote I am unsure whether his work was mainly philosophical or whether he muddied his hands with mathematics. I could check Wikipedia.

I am wondering how much impact a philosophical interpretation can have. It can alter the administration of things but cannot alter any mathematical truths. I believe philosophical interpretations are very important to the development of theories nonetheless. They can nudge toward particular investigations if the philosophical theory happens to be correct. 

Anything that might reduce the time involved for grappling with old arguments is all right with me. As we discussed earlier and now again, issues from 1916 and 1922 are still being seriously debated in physics.

Mathematics, philosophy and physics are thoroughly bound up with each other now, because so many discoveries in math and physics bring heavy baggage to the table and renew the call for philosophy. Popularizers are indispensable for 99.99999% of people. Only the ones able to dig all the way to the roots are technically self-reliant. Those are people like Gauss, Einstein, Riemann, Poincare and Tao _et al_, and their bright disciples.

The long and short of that is, unless one is extremely bright and undertook this journey from an early age, acquiring all the right tools to investigate relativity or the Riemann conjecture, one's understanding will be a popularized understanding, bereft of the technical details required for a truer grip.

Philosophy itself is changing as the need to interpret an explosion of physical theories presents new challenges. Technically versed writers operating one or two levels of abstraction above the "machine language" of folk like the prestigious list above, interpret the "message" for the interested masses, who are yet several more levels of abstraction above. Though not a new face of philosophy, it is now necessarily a prominent one.

It seems my vacation has made me talkative. 

Einstein was a great image. The wild hair alone set him apart. His scientific disputes were more gentlemanly than most. But let us not forget, it was Einstein and not the others who made a verifiable prediction about the perihelion precession of Mercury. Predictions verifiable through measurement or experiment are of extreme value in scientific accomplishment.

The above merely by way of offering an explanation of why Einstein's interpretation of space-time might have prevailed philosophically over his anticipators and rivals.

All things are politicized, especially in our era. You can't get a right answer to anything, and that is typical.

----------


## YesNo

I am about half way through Prime Obsession and it is putting the Riemann Hypothesis in a perspective, both mathematical and historical, that makes sense. I'm glad to be reading it. 

After reading Canales I don't think spacetime is anything more than a fiction. I don't know how the transit of Mercury fits into the creation of the brand name "Einstein", but I see "Einstein" as a marketing brand for an underlying cultural commodity. I don't think reality can be divided into points or instants. Planck's constant would be one argument against that and then there are Zeno's arguments that such a view would not allow motion to exist.

----------


## desiresjab

> I am about half way through Prime Obsession and it is putting the Riemann Hypothesis in a perspective, both mathematical and historical, that makes sense. I'm glad to be reading it. 
> 
> After reading Canales I don't think spacetime is anything more than a fiction. I don't know how the transit of Mercury fits into the creation of the brand name "Einstein", but I see "Einstein" as a marketing brand for an underlying cultural commodity. I don't think reality can be divided into points or instants. Planck's constant would be one argument against that and then there are Zeno's arguments that such a view would not allow motion to exist.


Only mathematics is not one of these useful fictions in my view, which is why I keep bringing it up. The Ptolemaic, Newtonian clock, big bang, space-time and multi-verse models are all fictions to me, wonderful fictions that advance our research and our journey. In the future there will be many more cosmic models, and some will gain ascendancy for a while. I believe it is an infinite process. The most precious answers will always remain in question form.

Then again, Poincare among others considered Cantor's transfinite set theory to be a fiction. Whether this fiction has ever shown a useful or practical side, I am not sure. Cantor and others do show how transcendental numbers can be constructed. That is as close to a practical application as I know of. Modular arithmetic became the language of digital computer encryption. I think transfinite set theory is still only the language of itself, but I would not be shocked to learn I was wrong, either, because each difficult endeavor requires major effort to penetrate, and I haven't given enough to that argument.

Anyway, once we come to terms that mathematics is not one of these fictions, we are able to make a necessary separation. Mathematicians themselves proceed creatively, but the end result is the discovery of the obvious, the discovery of the only way things could ever have been with regard to the numbers.

The numbers can cause scientific theories to bloom or fade. An equals sign means equal cardinality, in the end. Whenever, finally, the numbers do not work out, the theory does not work out either, and goes away. String theory may be in the process of doing this.

Science without numbers cannot be precise. A pinch, a nubbin and a nip are not consistent unless they are standardized, as gram, milligram and microgram are.

Sometimes quite deep mathematics will support wonderful fictions up to a point, then suddenly leave off its protection, exposing the fiction as a false route out of the maze. Euler once devised a formula that explicitly produced something like the first forty-some-odd primes but thereafter could no longer be trusted.
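The Euler formula alluded to is presumably the polynomial n² + n + 41, which produces primes for n = 0 through 39 and then fails. A quick check, assuming that is the formula meant:

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Euler's polynomial n^2 + n + 41 produces primes for n = 0..39 ...
values = [n * n + n + 41 for n in range(41)]
assert all(is_prime(v) for v in values[:40])
# ... but can no longer be trusted at n = 40, where 1681 = 41 * 41
assert values[40] == 41 * 41 and not is_prime(values[40])
```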

Personally, I feel that if we knew everything about the nature and behavior of prime numbers, such knowledge would somehow provide ultimate answers to just about everything. That is why it is exciting to see the Riemann conjecture at the very front of mathematical research, since ultimately it is a conjecture about prime numbers. Prime numbers have always been a hot topic in mathematics, but it is good to see them clearly at the forefront and so much talent now concentrated on them. This can only lead to great things. Of course these coming answers will have big echoes in science and philosophy.

----------


## desiresjab

Since I am having a lot of thoughts right now, I may as well go on with them. With regard to my conjecture involving twin primes, its extension to prime triplets is put in serious jeopardy by the simple observation that two 4n+3 primes that are almost unfathomably far out the number line and only four units apart will produce something extraordinarily close to a square, yet one whose diagonal will nevertheless always cut the appropriate quadrant of Eisenstein's diagram into two unequal quantities of lattice points. This closeness to a square is the main reason I made the conjecture in the first place, so in some sense it puts the whole conjecture in jeopardy. I need a way to attack the problem.

----------


## Dreamwoven

> Only mathematics is not one of these useful fictions in my view, which is why I keep bringing it up. The Ptolemaic, Newtonian clock, big bang, space-time and multi-verse models are all fictions to me, wonderful fictions that advance our research and our journey. In the future there will be many more cosmic models, and some will gain ascendancy for a while. I believe it is an infinite process. The most precious answers will always remain in question form.
> 
> Then again, Poincare among others considered Cantor's transfinite set theory to be a fiction. Whether this fiction has ever shown a useful or practical side, I am not sure. Cantor and others do show how transcendental numbers can be constructed. That is as close to a practical application as I know of. Modular arithmetic became the language of digital computer encryption. I think transfinite set theory is still only the language of itself, but I would not be shocked to learn I was wrong, either, because each difficult endeavor requires major effort to penetrate, and I haven't given enough to that argument.
> 
> Anyway, once we come to terms that mathematics is not one of these fictions, we are able to make a necessary separation. Mathematicians themselves proceed creatively, but the end result is the discovery of the obvious, the discovery of the only way things could ever have been with regard to the numbers.
> 
> The numbers can cause scientific theories to bloom or fade. An equals sign means equal cardinality, in the end. Whenever, finally, the numbers do not work out, the theory does not work out either, and goes away. String theory may be in the process of doing this.
> 
> Science without numbers cannot be precise. A pinch, a nubbin and a nip are not consistent unless they are standardized, as gram, milligram and microgram are.
> ...


I basically agree with desiresjab on this.

----------


## YesNo

> Since I am having a lot of thoughts right now, I may as well go on with them. With regard to my conjecture involving twin primes, its extension to prime triplets is put in serious jeopardy by the simple observation that two 4n+3 primes that are almost unfathomably far out the number line and only four units apart will produce something extraordinarily close to a square, yet one whose diagonal will nevertheless always cut the appropriate quadrant of Eisenstein's diagram into two unequal quantities of lattice points. This closeness to a square is the main reason I made the conjecture in the first place, so in some sense it puts the whole conjecture in jeopardy. I need a way to attack the problem.


I think you might be right about the twin primes and the number of lattice points, but all I have to go on are tests for twins below 100. I don't understand what you are saying about prime triplets. They would be numbers of the forms: p, p + 2 and p + 6 or p - 4, p and p + 2 where all of these are prime.
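Those two forms can be enumerated directly. A small sketch (with a throwaway trial-division test) listing the triplets whose smallest member is below 100:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The two shapes of a prime triplet: (p, p+2, p+6) and (p, p+4, p+6).
triplets = []
for p in range(2, 100):
    if is_prime(p) and is_prime(p + 6):
        if is_prime(p + 2):
            triplets.append((p, p + 2, p + 6))
        if is_prime(p + 4):
            triplets.append((p, p + 4, p + 6))

print(triplets)  # begins (5, 7, 11), (7, 11, 13), (11, 13, 17), ...
```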

----------


## YesNo

> Only mathematics is not one of these useful fictions in my view, which is why I keep bringing it up. The Ptolemaic, Newtonian clock, big bang, space-time and multi-verse models are all fictions to me, wonderful fictions that advance our research and our journey. In the future there will be many more cosmic models, and some will gain ascendancy for a while. I believe it is an infinite process. The most precious answers will always remain in question form.


The problem is that mathematics need not apply to the universe. I view mathematics as a game that might have some practical value, but need not have any practical value. I see mathematics like the game of chess. Although chess has kings, queens, knights, bishops and pawns, we cannot expect those fictions to represent real kings, queens, knights, bishops or peasants any more than we can expect a mathematical structure to represent reality.




> Then again, Poincare among others considered Cantor's transfinite set theory to be a fiction. Whether this fiction has ever shown a useful or practical side, I am not sure. Cantor and others do show how transcendental numbers can be constructed. That is as close to a practical application as I know of. Modular arithmetic became the language of digital computer encryption. I think transfinite set theory is still only the language of itself, but I would not be shocked to learn I was wrong, either, because each difficult endeavor requires major effort to penetrate, and I haven't given enough to that argument.


The universe has to be finite for life to exist in it. That would be suggested by Olbers' paradox. Now there may be infinitely many universes, and I suspect there are other universes than ours given the evidence that ours had a beginning, but outside of that possible infinity of universes, transfinite numbers have no use value.




> Science without numbers cannot be precise. A pinch, a nubbin and a nip are not consistent unless they are standardized, as gram, milligram and microgram are.


I am not saying that numbers are not useful. All I am saying is that the universe does not go arbitrarily small which would be required if points actually existed. The physical justification for that is the need for Planck's constant.




> Personally, I feel that if we knew everything about the nature and behavior of prime numbers, such knowledge would somehow provide ultimate answers to just about everything. That is why it is exciting to see the Riemann conjecture at the very front of mathematical research, since ultimately it is a conjecture about prime numbers. Prime numbers have always been a hot topic in mathematics, but it is good to see them clearly at the forefront and so much talent now concentrated on them. This can only lead to great things. Of course these coming answers will have big echoes in science and philosophy.


The problem with mathematics is that we assume its ability to perform an analysis step, such as splitting composites into smaller primes, followed by a synthesis step of multiplying those primes to get the composite back again, is something that might also work in physical reality. It might not. That is, mathematical reductionism, represented by the reduction of composites to primes, may only work well within mathematics.

----------


## desiresjab

> I think you might be right about the twin primes and the number of lattice points, but all I have to go on are tests for twins below 100. I don't understand what you are saying about prime triplets. They would be numbers of the forms: p, p + 2 and p + 6 or p - 4, p and p + 2 where all of these are prime.


What I now realize is that tremendous size and lying close together is not the key at all, is not what makes two primes behave a certain way in QR. The key is just as simple, however. The key is how many factors of 2 are involved in (p-1)(q-1), from the pure Eisensteinian perspective.

With two 4n+3 primes, p-1 and q-1 each have only a single factor of 2 to contribute. That is to say, after dividing the total number of interior lattice points by 4, we come to an odd number, which of course cannot be divided evenly, so the two numbers have to have opposite characters when WAXY is divided once more by the diagonal. If only one more factor of 2 is available (which it always will be, as long as both primes are not 4n+3) to deal with this further division performed by the diagonal, then the diagonal will be dividing (apportioning) an even number of points in WAXY. If the power of 2 in the multiplication is only 2³, then WAXY is forced to produce negative exponents for both primes upon the further division by the diagonal. But if the power of 2 is 2⁴ or greater the exponents must always both be positive.

It appears that the nature of the exponents (and thereby reciprocity) depends only on the power of 2 in (p-1)(q-1), nothing else, in terms of Eisenstein's representation. The essence, the causal mechanism, is none other than high evenness. This was my original insight when I first started thinking about Brocard's problem and switched to QR, before I actually understood what I was talking about. It has taken me this long to understand that my own insight was hitting the nail on the head squarely, to intend a pun.

With twin primes we are always guaranteed at least 2³. What we simply need in order to always be even, i.e. in one another's residue set, is a 4n+1 number wherein n itself carries at least one factor of 2. From knowing nothing more we can always state the character of both primes with respect to each other in QR.
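For comparison, the textbook law of quadratic reciprocity says the two Legendre symbols (p/q) and (q/p) disagree exactly when both primes are 4n+3, i.e. when (p-1)(q-1) carries only 2²; with 2³ or more available, as with twins, the symbols agree. A numerical sketch using Euler's criterion (my own check, not the WAXY construction itself):

```python
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion; p an odd prime."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def v2(n):
    """Number of factors of 2 in n."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Reciprocity: (p/q)(q/p) = -1 exactly when both primes are 4n+3,
# i.e. when (p-1)(q-1) carries only 2^2.
for p, q in [(3, 7), (3, 5), (5, 13), (7, 11), (11, 13)]:
    flips = legendre(p, q) * legendre(q, p) == -1
    assert flips == (v2((p - 1) * (q - 1)) == 2)

# Twin primes (always one 4n+1 and one 4n+3) agree with each other:
for p in [3, 5, 11, 17, 29, 41]:
    assert legendre(p, p + 2) == legendre(p + 2, p)
```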

* * * * *

QR does not work when p=q. Imagine an 11×11 square: φ(121) is 110, not 100. This means you cannot even get four squares (not rectangles in this case) all with equal lattice points.

But what about a 17×17 square, where there are plenty of 2's to go around? This case will provide four equal squares all right. But it is a dead end, a non-sequitur, because no number between 1 and 16 inclusive will ever square out to 17 (mod 17), and so forth for all primes.

A visibly cogent fact is that the line p=q on graphing paper is a 45-degree line, is our diagonal, and goes through all the points (1,1), (2,2), (3,3), ..., (17,17). The method does not work on squares; it only works on rectangles. The diagonal hits eight lattice points in WAXY. 256 interior lattice points divided by four is 64 for our quadrant square, but eight of these cannot count because the diagonal hits them, bringing WAXY down to 56 serviceable points, and each small triangle to 28, indeed equal, but meaningless except perhaps for why it is meaningless. Only on rectangles where p≠q are there no lattice points on the diagonal. p and q respond identically but meaninglessly when p=q because they do not kick against one another rationing out squares under the other as modulus. At the moment I do not know how to subtract those eight extra points in the context of something meaningful; I just know eight would have to be subtracted in this particular case to somehow fictionally redirect the apparently nonsensical. This is all about finding the logic of why it is illogical for squares themselves.
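Those counts can be verified mechanically, reading WAXY as the lower-left 8×8 quadrant of the interior lattice (my reading of the construction):

```python
n = 17
# Interior lattice points of the 17x17 square: (x, y) with 1 <= x, y <= 16.
interior = [(x, y) for x in range(1, n) for y in range(1, n)]
assert len(interior) == 256

half = (n - 1) // 2  # 8: two successive divisions by 2
quadrant = [(x, y) for (x, y) in interior if x <= half and y <= half]
assert len(quadrant) == 64                 # 256 / 4

on_diag = [pt for pt in quadrant if pt[0] == pt[1]]
below = [pt for pt in quadrant if pt[1] < pt[0]]
above = [pt for pt in quadrant if pt[1] > pt[0]]
assert len(on_diag) == 8                   # the eight unusable points
assert len(below) == len(above) == 28      # 56 serviceable, split evenly
```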

So where p=q, it would have to look like:

[(p-1)(q-1)] - [(p-1)/2].

This is

p² - 2p + 1 - (p-1)/2, which, doubled to clear the fraction, is

2p² - 4p + 2 - p + 1 = 2p² - 5p + 3, which means nothing to me but the sense of the nonsense.

A modulus is about division and remainders, and division is about ratios, and QR is about two unequal primes acting as units for each other under the operation of squaring, spitting out squares as remainders. Pitcher and catcher. Then they switch places, the other acts as divider, and we see which numbers its overlap spits out as squares.

* * * * *

An interesting note:

In _Prime Obsession_ Derbyshire states that 4n+3 primes consistently outnumber 4n+1 primes. There may be one brief interlude where 4n+1 primes hold the lead, but then it reverts to a 4n+3 lead, supposedly for good. Whether they will always hold the lead is probably unproven; I cannot remember, or whether he said.
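This race between the two residue classes (often called Chebyshev's bias) is easy to tally with a sieve; the 4n+1 class reportedly first takes the lead near 26,861. A sketch:

```python
def prime_race(limit):
    """Count primes below `limit` in the classes 4n+1 and 4n+3 (simple sieve)."""
    sieve = bytearray([1]) * limit
    sieve[0] = sieve[1] = 0
    c1 = c3 = 0
    for n in range(2, limit):
        if sieve[n]:
            for m in range(n * n, limit, n):
                sieve[m] = 0
            if n % 4 == 1:
                c1 += 1
            elif n % 4 == 3:
                c3 += 1
    return c1, c3

c1, c3 = prime_race(10000)
assert c3 > c1   # the 4n+3 primes hold the lead at this point
```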

----------


## desiresjab

Further note:

φ(p)-φ(p-1) seems to be the proper calculation for subtracting the eight, which I was trying to arrive at above for a 17×17 square.
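That arithmetic checks out numerically: φ(17) = 16 and φ(16) = 8, a difference of 8. A quick totient sketch:

```python
def phi(n):
    """Euler's totient via trial factorization."""
    result, m, d = n, n, 2
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:
        result -= result // m
    return result

assert phi(17) - phi(16) == 8   # the eight diagonal points of the 17x17 case
assert phi(11 * 11) == 110      # the earlier 11x11 remark: phi is 110, not 100
```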

----------


## YesNo

> What I now realize is that tremendous size and lying close together is not the key at all, is not what makes two primes behave a certain way in QR. The key is just as simple, however. The key is how many factors of 2 are involved in (p-1)(q-1), from the pure Eisensteinian perspective.
> 
> With two 4n+3 primes, p-1 and q-1 each have only a single factor of 2 to contribute. That is to say, after dividing the total number of interior lattice points by 4, we come to an odd number, which of course cannot be divided evenly, so the two numbers have to have opposite characters when WAXY is divided once more by the diagonal. If only one more factor of 2 is available (which it always will be, as long as both primes are not 4n+3) to deal with this further division performed by the diagonal, then the diagonal will be dividing (apportioning) an even number of points in WAXY. If the power of 2 in the multiplication is only 2³, then WAXY is forced to produce negative exponents for both primes upon the further division by the diagonal. But if the power of 2 is 2⁴ or greater the exponents must always both be positive.
> 
> It appears that the nature of the exponents (and thereby reciprocity) depends only on the power of 2 in (p-1)(q-1), nothing else, in terms of Eisenstein's representation. The essence, the causal mechanism, is none other than high evenness. This was my original insight when I first started thinking about Brocard's problem and switched to QR, before I actually understood what I was talking about. It has taken me this long to understand that my own insight was hitting the nail on the head squarely, to intend a pun.
> 
> With twin primes we are always guaranteed at least 2³. What we simply need in order to always be even, i.e. in one another's residue set, is a 4n+1 number wherein n itself carries at least one factor of 2. From knowing nothing more we can always state the character of both primes with respect to each other in QR.


It makes sense that the factors of 2 would be important here.




> QR does not work when p=q. Imagine an 11×11 square: φ(121) is 110, not 100. This means you cannot even get four squares (not rectangles in this case) all with equal lattice points.
> 
> But what about a 17×17 square, where there are plenty of 2's to go around? This case will provide four equal squares all right. But it is a dead end, a non-sequitur, because no number between 1 and 16 inclusive will ever square out to 17 (mod 17), and so forth for all primes.
> 
> A visibly cogent fact is that the line p=q on graphing paper is a 45-degree line, is our diagonal, and goes through all the points (1,1), (2,2), (3,3), ..., (17,17). The method does not work on squares; it only works on rectangles. The diagonal hits eight lattice points in WAXY. 256 interior lattice points divided by four is 64 for our quadrant square, but eight of these cannot count because the diagonal hits them, bringing WAXY down to 56 serviceable points, and each small triangle to 28, indeed equal, but meaningless except perhaps for why it is meaningless. Only on rectangles where p≠q are there no lattice points on the diagonal. p and q respond identically but meaninglessly when p=q because they do not kick against one another rationing out squares under the other as modulus. At the moment I do not know how to subtract those eight extra points in the context of something meaningful; I just know eight would have to be subtracted in this particular case to somehow fictionally redirect the apparently nonsensical. This is all about finding the logic of why it is illogical for squares themselves.
> 
> Only on rectangles where p≠q are there no lattice points on the diagonal.
> 
> So where p=q, it would have to look like:
> ...


The idea of two unequal primes acting as units makes sense. 




> An interesting note:
> 
> In _Prime Obsession_ Derbyshire states that 4n+3 primes consistently outnumber 4n+1 primes. There may be one brief interlude where 4n+1 primes hold the lead, but then it reverts to a 4n+3 lead, supposedly for good. Whether they will always hold the lead is probably unproven; I cannot remember, or whether he said.


I think I remember reading something like that in Derbyshire's text. I think ultimately the ratio of the number of primes in the two sets is supposed to converge to 1, implying they have the same number, but initially the 4n + 3 set has more. I couldn't find the page to reference it.

----------


## desiresjab

It just so happens that it would be possible to devise rectangles extremely far from square in which either p-1 or q-1 was loaded with factors of 2. We assume p and q are both odd primes, as usual, with one of them a 4n+1 prime and the other a 4n+3, to keep matters clear and to put the burden of factors of 2 entirely on the minus one of the 4n+1 prime.

It takes two divisions by 2 to divide the rectangle into quadrants, and one more division by 2 to diagonally divide the bottom left and top right quadrants as in Eisenstein's diagram.

The p-1 or q-1 of a 4n+3 prime has only one factor of 2 to give. As long as the 4n+1 prime provides two factors of 2, which it must at least do by its nature, all three divisions can take place, preserving the possibility of triangles WAY and YAX containing the same number of lattice points.

We do not really expect it, though, in the case of a very eccentric rectangle ABCD. Quite the opposite. We expect there not to be the same number of lattice points in WAY and YAX.

My conjecture about twin primes seems to involve both concepts--low eccentricity and factors of 2. So far, I am too dumb to prove it. I cannot even say if it is provable, or if it has already been proven.

----------


## desiresjab

Circumstantial support for the conjecture also lies in the fact that in all cases where p-1 has more than two factors of 2, the diagonal will be dividing columns of even numbers. If WAY and YAX differ in the number of their lattice points, they must differ by at least two, which, it seems to me, is geometrically harder for a low-eccentricity rectangle to do than to differ by one. I don't see how it could happen. I say it cannot.

I can close in on it logically but so far cannot find a mathematical method to prove it.

----------


## desiresjab

Even numbers of course can be partitioned into two evens or two odds in a variety of ways. 12 is 11+1 and 10+2, also 9+3 and 8+4, etc.

To achieve an eccentric additive partition in an Eisenstein count of lattice points would seem to require high eccentricity on the part of the rectangle ABCD, i.e., less squareness. Right?

The eccentricity is why, for the eccentric Eisenstein rectangle 5×13 (two 4n+1 primes), the count for WAXY is 12 lattice points, yet division into YAX and WAY through the diagonal yields 7+5, allowing the QR to be negative, since 5 and 7 are both odd, which is all that is required.

The above result raises the question of whether there are two 4n+1 primes vastly far out the number line which are only four units apart, giving their rectangle profoundly low eccentricity, yet which somehow achieve a difference of two in their count of lattice points for YAX and WAY. Or are they all forced, due to extremely low eccentricity, to always be even and, furthermore, identical in value?

This is another perspective to the conjecture.

----------


## desiresjab

A key thing to consider is that if you do choose two 4n+1 primes far out the number line separated by only four units, one of them is forced to have only two factors of 2. Evenness follows a pattern in the even numbers. Here is a list, read left to right, for the even numbers 2, 4, 6, 8, ..., showing how many factors of 2 each even number contains.

1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, 1, 2, 1, 3, 1, 2, 1, 4,

1, 2, 1, 3, 1, 2, 1, 6, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, ...
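The list is the count of factors of 2 in 2, 4, 6, 8, ...; a generator for it, for checking entries farther out:

```python
def ruler(k):
    """Factors of 2 in the k-th even number, 2k (the 'ruler function')."""
    n, v = 2 * k, 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

seq = [ruler(k) for k in range(1, 49)]
assert seq[:8] == [1, 2, 1, 3, 1, 2, 1, 4]
assert seq[15] == 5   # 32 = 2^5
assert seq[31] == 6   # 64 = 2^6
```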


My guess is that if the minus ones of two "twin" 4n+1 primes far out the number line can indeed achieve an eccentric partition (anything other than a split down the middle), it must be because, as we can see from the list, one of them has a mere two factors of 2. The other one can be slightly richer in 2's, or vastly richer. Now I am wondering if the size of that difference plays any part, if, of course, an eccentric partition can occur under any condition at all.

I don't think anyone else is going to do it, so it must be up to me. Surely there are a few sets of twin 4n+1's far enough out for low eccentricity yet within range of calculational investigation using some available tools.
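Candidate pairs are easy to harvest. Since p-1 and p+3 differ by 4, exactly one of them is 4 mod 8, so one member of each such pair has precisely two factors of 2 while the other is richer. A sketch that collects pairs of 4n+1 primes four apart and checks this:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def v2(n):
    """Number of factors of 2 in n."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# Pairs of 4n+1 primes separated by exactly four units.
pairs = [(p, p + 4) for p in range(5, 5000)
         if p % 4 == 1 and is_prime(p) and is_prime(p + 4)]
# e.g. (13, 17), (37, 41), (97, 101), ...
for p, q in pairs:
    assert min(v2(p - 1), v2(q - 1)) == 2   # one side has exactly 2^2
    assert max(v2(p - 1), v2(q - 1)) >= 3   # the other is richer in 2's
```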

The list above is called the ruler function. Believe it or not, that discontinuous thing has even been re-tooled for calculus.

----------


## desiresjab

To get off the math, does anyone believe there is such a thing as *the* Nature of Reality, or is that another comfortable metaphor?

Is what we perceive reality? Does this include people hallucinating, too?

_Or in the night imagining some fear
How easy is a bush supposed a bear_
--Shakespeare

How much is objective reality, and how much is subjective reality?

_Whatever flames upon the night
Man's own resinous heart has fed_
--Yeats

----------


## Dreamwoven

I am much more at home in the fuzzy world of reality than in the artificial world of statistical "accuracy".

----------


## YesNo

> It just so happens it would be possible to devise extremely far from square rectangles in which either p-1 or q-1 was loaded with factors of 2. We assume p and q are both odd primes, as usual, with one of them a 4n+1 prime and the other a 4n+3, to keep matters clear, and to put the burden of factors of 2 entirely on the minus one of the 4n+1 prime.
> 
> It takes two divisions by 2 to divide the rectangle into quadrants, and one more division by 2 to diagonally divide the bottom left and top right quadrants as in Eisenstein's diagram.
> 
> The p-1 or q-1 of a 4n+3 prime has only one factor of 2 to give. As long as the 4n+1 prime provides two factors of 2, which it must at least do by its nature, all three divisions can take place, preserving the possibility of triangles WAY and YAX containing the same number of lattice points.
> 
> We do not really expect it, though, in the case of a very eccentric rectangle ABCD. Quite the opposite. We expect there not to be the same number of lattice points in WAY and YAX.
> 
> My conjecture about twin primes seems to involve both concepts--low eccentricity and factors of 2. So far, I am too dumb to prove it. I cannot even say if it is provable, or if it has already been proven.


Prior to proving the conjecture, just stating it is important. 

There appears to be more to the conjecture than that twin primes have the same number of lattice points in Eisenstein's diagram. How would you state your conjecture more generally? Are there other pairs of primes, besides twins, that should have the same number of lattice points as twin primes would? Could I assume that pairs of primes, one of the form 4n + 1 and the other of the form 4m + 3, have the same number of lattice points?

----------


## YesNo

> To get off the math, does anyone believe there is such a thing as *the* Nature of Reality, or is that another comfortable metaphor?
> 
> Is what we perceive reality? Does this include people hallucinating, too?
> 
> _Or in the night imagining some fear
> How easy is a bush supposed a bear_
> --Shakespeare
> 
> How much is objective reality, and how much is subjective reality?
> ...


I agree with Dreamwoven that statistical accuracy seems artificial. That even goes for the probability in the quantum wave equations. All that statistics does is make things look objective, but that is because it is no longer talking about the particular reality that is in front of us.

Part of the problem of reality is that we think and hope it is unconscious and objective. However, everything, including quantum reality, may be participating in various forms of subjectivity that seem foreign to our own. There may be nothing that does not participate in some form of subjectivity.

It is convenient to find ways to objectify reality. Some people rely on sacred texts. Some people rely on mathematics. Both of these work to some extent in the sense that they justify a belief that reality isn't totally chaotic. 

Does the indeterminism of quantum reality imply that quantum reality shares in a form of subjectivity of its own since it appears to be making choices when we ask it questions? I think it does. What difference does that make? Probably not much. We will still be looking for patterns we can rely on. We will still be looking to better understand the universe around us. The only difference is we should have a greater respect for the reality we participate in. It is not dead. It is not objective and it cannot be completely objectified through statistics, mathematics or a set of sacred texts.

----------


## desiresjab

> I agree with Dreamwoven that statistical accuracy seems artificial. That even goes for the probability in the quantum wave equations. All that statistics does is makes things look objective, but that is because it is no longer talking about the particular reality that is in front of us.
> 
> Part of the problem of reality is that we think and hope it is unconscious and objective. However, everything, including quantum reality, may be participating in various forms of subjectivity that seem foreign to our own. There may be nothing that does not participate in some form of subjectivity.


It takes a large pair of "mays" for what I have highlighted in red, mister.

Why must people complain that mathematical accuracy seems artificial, rather than accept that, currently, if there is any road toward this kind of knowledge, it begins and ends somewhere with a guy tabulating statistics, and that he is, after all, tabulating in a language we created from the nature of what cannot be otherwise?

The point is, either someone has a better approach, or they do not. Of course, what good is it to us right now if it is not objectively presented? I admit there may be better ways. Will those ways please step forward?

The nice thing is that only by those means will the right interpreter step forward, the right philosopher, at whatever stage we are stalled. Someone once said:

_Psychology is a body of theory awaiting phenomena; parapsychology is a body of phenomena awaiting theory._ 

Do not expect numbers to write an equation for the Sermon on the Mount or the best way of life. But if someday a deep theorem paves the way to verifiable astral travel, do not be surprised either. 




> Does the indeterminism of quantum reality imply that quantum reality shares in a form of subjectivity of its own since it appears to be making choices when we ask it questions?


As Socrates did of those who leaned too heavily toward or against the existence of gods, I would call the above presumptuous at this stage, in fact throughout, from _imply_ to _of its own_ to _making choices_.

I feel _imply_ is far too strong; it means something will follow, or is a necessary extension or consequence. At any rate, slang usage of the word is not fit for philosophical discourse where precision is needed, I am hopeful you will agree.

To say quantum reality participates in a subjectivity *of its own* is open to literally anything, you must admit. I propose that subjectivity in the quantum world is statistics, and an objective grappling with what we can only call randomness looks very different when "experienced" from the quantum perspective.

_Choice_ is a very interpretive choice of words when talking about electrons and their ilk. I think it is a purely semantic convenience.

----------


## YesNo

> It takes a large pair of "mays" for what I have highlighted in red, mister.


I use "may" to not sound too dogmatic. Replace it with "is" if you like.




> The point is, either someone has a better approach, or they do not. Of course, what good is it to us right now if it is not objectively presented? I admit there may be better ways. Will those ways please step forward?


For most people, for their immediate problems, a better approach is through some form of meditation which involves their subjectivity.

Edit: It occurred to me that another way, another better approach to knowledge that matters, is through "middle-way" ethics. Again this is a subjective and not an objective approach to knowledge. These approaches, meditation and ethics, subjective thought and intentional action, are not available to deterministic reality (computers, robots, etc) nor to any hypothetical, random, unconscious reality such as zombies. 




> Do not expect numbers to write an equation for the Sermon on the Mount or the best way of life. But if someday a deep theorem paves the way to verifiable astral travel, do not be surprised either.


I don't expect numbers to do that. That would be one reason why numbers are inadequate to answer most problems people actually have.




> As Socrates did of those who leaned too heavily toward or against the existence of gods, I would call the above presumptuous at this stage, in fact throughout, from _imply_ to _of its own_ to _making choices_.
> 
> I feel _imply_ is far too strong, it means something will follow or is a necessary extension or consequence of. At any rate, slang usage of the word is not fit for philosophical discourse where precision is needed, I am hopeful you will agree.
> 
> To say quantum reality participates in a subjectivity *of its own* is open to literally anything, you must admit. I propose that subjectivity in the quantum world is statistics, and an objective grappling with what we can only call randomness looks very different when "experienced" from the quantum perspective.
> 
> _Choice_ is a very interpretive choice of words when talking about electrons and their ilk. I think it is a purely semantic convenience.


My use of "choice" and "subjectivity", especially when discussing reality that seems very different from our own, requires definitions which can be accepted or rejected. I accept the following definitions. 

If we test something and it gives an answer that is neither deterministic nor random, I define that answer as a "choice".

If we detect a choice, I define whatever reality made that choice as having a "subjectivity" allowing the choice to be made.

These definitions of "choice" and "subjectivity" could be applied to our own choices and subjectivity which is why I use those specific words and do not make up other words.

----------


## desiresjab

I am interested in seeing what has convinced you of quantum consciousness. I cannot read everything I would like to--I simply write too much--so I sometimes must rely on shortened syntheses from trusted associates. I do not know the details of these particular experiments, though I am familiar with the rudiments of wave-particle-slits experimentation. How did they set the experiment up in a way that led to your convictions, if that is not too strong a word?

----------


## YesNo

I don't know much about quantum physics, but I did try to understand it when discussing "many worlds" a few years ago on these forums. However, I don't think one has to know much about it to get the relevant points.

The key to the consciousness idea is "indeterminacy" which implies both that something is not completely determined and also not random as a coin toss. I'll admit that is my idea. Most people I've read who talk about quantum consciousness are referring to the consciousness of the observer, not what is observed.

In addition I am interested in the non-individualistic nature of these quantum particles and the non-local behavior of particles that are entangled. This assumes that space and time are determined by local properties that Einstein posited. Until I read Canales, I assumed they were true. Now I'm not sure.

The double slit experiments that impress me are those showing what happens on the detection screen when one or two slits are open. Those form the base cases.  It makes it look as if the slits are determining what happens. Then one tries to know which slit a quantum particle actually went through. However, just knowing that changes the wave pattern on the detection screen making it look as if two single slits were used and not a double slit, but that is after the fact of having gone through the double slit and not two single slits. So passing through a double slit is not all that is affecting what the result on the detection screen will be. The non-individualistic nature of the process is seen by passing the quantum particles one by one through the double slit without knowing which slit they went through. The pattern then becomes the same wave pattern on the detection screen which is not a random pattern implying that each individual particle (if one can continue thinking of them in this way) went through both slits at the same time and interfered with itself.

There is an additional question of just what it is that is going through those slits and arriving at the detection screen. Is it worth continuing to use the "particle" metaphor if the particle is required to go through both slits at the same time and interfere with itself? Is it even worth calling it a "wave" since one can detect which slit it went through and break the wave pattern?

----------


## desiresjab

> The key to the consciousness idea is "indeterminacy" which implies both that something is not completely determined and also not random as a coin toss. I'll admit that is my idea. Most people I've read who talk about quantum consciousness are referring to the consciousness of the observer, not what is observed.


It harkens back to "cosmic consciousness."





> In addition I am interested in the non-individualistic nature of these quantum particles and the non-local behavior of particles that are entangled. This assumes that space and time are determined by local properties that Einstein posited. Until I read Canales, I assumed they were true. Now I'm not sure.


Entangled particles seem determined. The state of one can always be known by checking the other.




> The double slit experiments that impress me are those showing what happens on the detection screen when one or two slits are open. Those form the base cases. It makes it look as if the slits are determining what happens. Then one tries to know which slit a quantum particle actually went through. However, just knowing that changes the wave pattern on the detection screen making it look as if two single slits were used and not a double slit, but that is after the fact of having gone through the double slit and not two single slits. So passing through a double slit is not all that is affecting what the result on the detection screen will be. The non-individualistic nature of the process is seen by passing the quantum particles one by one through the double slit without knowing which slit they went through. The pattern then becomes the same wave pattern on the detection screen which is not a random pattern implying that each individual particle (if one can continue thinking of them in this way) went through both slits at the same time and interfered with itself.


Sure, that is all very impressive and mysterious. It means there is an awful lot we cannot explain. But nothing in it compels me to believe quantum particles are conscious.

We have an outside light. On top of it is a sensor which detects light or darkness to determine when to turn the light on. Under your beliefs I would have to call the sensor conscious. When the sensor gets too covered in bird sh*t, the light mistakenly stays on all the time. You would still call it consciousness, I guess. Is sensitivity to change consciousness?




> There is an additional question of just what it is that is going through those slits and arriving at the detection screen. Is it worth continuing to use the "particle" metaphor if the particle is required to go through both slits at the same time and interfere with itself? Is it even worth calling it a "wave" since one can detect which slit it went through and break the wave pattern?


Wavicles.

----------


## Dreamwoven

This is a bit like the latest astronomy post on "multiverses". We can't confirm that our universe is the only universe, but we build theories on the assumption that it is the only one. You might like to look at that post from space.com.

----------


## YesNo

> This is a bit like the latest astronomy post on "multiverses". We can't confirm that our universe is the only universe, but we build theories on the assumption that it is the only one. You might like to look at that post from space.com.


I couldn't find a specific article in the link. The idea of multiverses is confusing. I can think of three different versions of this. 

1) Given that the big bang occurred, and the microwave background implies some beginning occurred, then this event should not be unique. That means other universes, like our own, exist.

2) Given that it is conceivable for cosmological "constants" to be such that life could not exist, the anthropic principle implies that other universes exist so that ours supporting life could have a random chance of being.

3) To avoid the indeterminacy at the quantum level, any possibility that appears indeterminate in our universe is realized in another universe that pops into existence as soon as the indeterminate event takes place.

Of these, the first seems plausible. The other two are based on a metaphysical need to avoid choices occurring.

----------


## YesNo

> It harkens back to "cosmic consciousness."


That goes back to George Berkeley, I suspect. Although I don't have any problem with cosmic consciousness, I wonder if some form of consciousness is also at the quantum level to justify a panpsychism perspective.




> Entangled particles seem determined. The state of one can always be known by checking the other.


The first one measured chooses the state for both. Indeterminacy does not mean there is complete freedom of choice. I also find the non-individualistic implications interesting.




> Sure, that is all very impressive and mysterious. It means there is an awful lot we cannot explain. But nothing in it compels me to believe quantum particles are conscious.


I don't know that anyone else claims that this quantum reality is conscious. However, I don't usually have original ideas, so I suspect others have thought about it.

What compels me to consider that there is agency at that level is the absence of determinism and the absence of complete randomness. This makes me think that within some limited range a choice is made. That choice implies some form of consciousness.




> We have an outside light. On top of it is a sensor which detects light or darkness to determine when to turn the light on. Under your beliefs I would have to call the sensor conscious. When the sensor gets too covered in bird sh*t the light mistakenly stays on all the time. You would still call it consciousness, I guess. Is sensitivity to change consciousness?


If one gets a lot of this quantum reality together one gets objects, some natural and some we've made: rocks and trees, sensors and computers. These objects are more predictable. I would not say that a computer is conscious, because when it works it does so deterministically as a computer, not as the quantum reality that makes up that computer.




> Wavicles.


One of the problems with giving something like this a name is that name makes it seem as if we know something about that reality when all we know is a metaphor which may be leading us astray. The waviness of the reality is only known from results on a detection screen. Hence we assume that some interference caused the pattern prior to getting to the detection screen.

----------


## desiresjab

> I wonder if some form of consciousness is also at the quantum level to justify a panpsychism perspective.


You wonder? You certainly do. I would say you do more than that. This seems to be your primary philosophical obsession: to convince yourself or others that quantum particles make choices and the universe is conscious at many levels, and therefore we have free will because of quantum indeterminacy, based on really no evidence except what you for some reason want desperately to believe. I think that is a religious doctrine, not physics or even philosophy, and you treat it like a religious doctrine. It is not a beginning point for investigation. There is no ineluctable truth contained that can be seen for certain, like there is with the simple statement _two is the successor of one_. 




> The first one measured chooses the state for both. Indeterminacy does not mean there is complete freedom of choice. I also find the non-individualistic implications interesting.


Mere words, my boy, mere words. You are demonstrating what happens when an individual (yourself) takes a philosophical model too seriously. You are against taking scientific and mathematical models too seriously, but apparently it is okay to do so with a philosophical one. Philosophical models (interpretations) are produced several levels of abstraction above the trench math and physics. They are quite simply loose interpretations and possibilities, for the non-technically inclined to consider, of what _may be_ occurring. These "explanations" are then simplified further and finally transferred upward to books for the general public. It isn't over, though. They make it to this forum about eight levels of abstraction up from the real thing. Here, what gets said over and over with the thinnest of support is that electrons make choices. I do not see how this is the basis for a philosophy that one would keep going at endlessly. Precisely put, it is a wild theory, not an ineluctable interpretation. 




> I don't know that anyone else claims that this quantum reality is conscious.


All you do is claim it. I gave you a chance to convince me. Why could quantum reality not merely *contain* some consciousness rather than *be* conscious, as you keep stating?

I live and sense in a mammal-sized reality. That does not mean to me that mammal-sized reality is conscious, it means I am conscious within it. Some boulders are mammal-sized.




> What compels me to consider that there is agency at that level is the absence of determinism and the absence of complete randomness. This makes me think that within some limited range a choice is made. That choice implies some form of consciousness.


De-hypnotize yourself, friend. Do not jump to the forking word choice at every opportunity. You have at best a vague suspicion, sir, a sneaking suspicion, as my mother used to say. ..._the absence of determinism and the absence of complete randomness_. With such statements you have hypnotized yourself by playing loosely with their meanings. There is random and there is non random. Random means you cannot make accurate predictions of an outcome. The short and sweet of it: randomness is predictable only through luck; non randomness could be tracked down by a species with fine enough tools, i.e. there is a formula.

Randomness is a concept. The concept says there is no formula for this thing. If there does exist a formula for it, then it is not this thing. 

You are operating under false assumptions, according to your own criteria. Randomness is a pure abstraction; it may not exist. It is one of those concepts like continuity of the number line and infinite divisibility. No one even knows if these concepts apply all the way down in nature, or if nature operates discretely at a wee level. Heat does, as we know from Planck.

Here you are applying these abstract mathematical models (as you like to call them) to reality nonchalantly and saying you can form beliefs out of them. Can the universe at any level express complete randomness? Maybe not. It is only a concept, so far. Perhaps the universe can only asymptotically approach pure randomness. Even in that case, there should be a formula for the universe.

I am not surprised if pure randomness does not exist. Asymptotic randomness might come as close as any limit we set, however, or it might have its own limit in this particular (or any) universe, like the speed of light does.

The newly coined concept of asymptotic randomness still allows you that thin edge of non randomness you seek. Just remember, any non randomness is determinate. This non randomness you will define as choice.

If we imagine a bell curve, high in the middle and approaching the axis asymptotically in each direction, I know that to the left the universe can at least approach pure randomness extremely closely. 

Conversely, there is no obstruction abstractly at least to considering the other side of the graph where pure order is approached asymptotically. This might be the realm of heaven. It is an interesting notion but I am not forming any religious beliefs out of it myself.

Does more order, then, bring joy? There is plenty of order in a rest home and little joy. There has to be order and freedom.

----------


## desiresjab

No one should get in a tizzy over the statement that any degree of non randomness in our universal beginnings means our universe is theoretically reproducible with the right formula. For three centuries many great minds, including Newton, Laplace and Poincaré, struggled with the mere three-body problem. The complexities of its families of solutions still dazzle us.

Imagine, then, trying to find a formula to derive the universe and its particles. Perhaps not impossible, but unimaginably close to impossible.

----------


## YesNo

> There is no ineluctable truth contained that can be seen for certain, like there is with the simple statement _two is the successor of one_.


One of our differences is that I don't view mathematics as more than a game. You seem to think there is more to it.




> De-hypnotize yourself, friend. Do not jump to the forking word choice at every opportunity.


You'll have to do better than that to de-hypnotize me. 




> Randomness is a concept. The concept says there is no formula for this thing. If there does exist a formula for it, then it is not this thing.


The thing about determinism and randomness is that they are ways to avoid subjectivity and choice.

Since I don't see how we could even come up with the concepts of determinism or randomness without subjectivity and a choice to focus our attention on these concepts, our subjectivity and our ability to choose can't be reduced to these derivative concepts. 

That would be another place where we disagree.

----------


## desiresjab

> One of our differences is that I don't view mathematics as more than a game. You seem to think there is more to it.


You will call it a game, come tsunamis or asteroid hits, I know that. In that case I would at least like to hear from you that it is the game we found here, whose laws we did not invent but only formulated in more detail from what is abstractly necessary.

Which is it, my dear Yes/No? Did we wholly invent this game of mathematics, like we did chess, or did we find ineluctable rather than arbitrary rules were necessary even to do something as basic as counting properly in the universe we found ourselves in? You should be able to answer that without too much sophism about rules of a game.

1. The laws of math are purely our inventions, just another game?

2. The laws of math are such that they must be as they are. We can discover and develop new byways in them, but not invent the mechanics those laws rely upon. 

Take a crack at it.

----------


## YesNo

Let's consider the claim that "two follows one". I agree with you that that statement is true in every universe since none of those universes care one way or the other about it. Similarly, the game Tic-Tac-Toe is true in any universe. A limerick I write is a limerick in any universe. 

To get more specific, what does it mean to say that two follows one? If we are in the integers, it means that there is a binary relationship, in this case a strict total ordering, containing the pair (2,1), requiring that 2 > 1.

We could easily switch that relationship around and say that "one follows two" by changing the binary order relationship to include (1,2) and not (2,1). In general if a > b then we make b > a. That is also true in every universe, because the existence of any of those universes is not dependent upon how we define that binary order relationship.

We can go further. Consider the finite field of two equivalence classes labelled 0 and 1. 0 contains all the integers that are even and 1 contains all the integers that are odd. In that field the statement "two, as an equivalence class, does not exist" is true in all universes. 
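The two-class arithmetic above can be made concrete in a few lines of Python (just integers mod 2; the function name is my own):

```python
# The field of two equivalence classes: 0 (even) and 1 (odd).
# Every integer collapses into one of them.
def cls(n):
    return n % 2

print(cls(2))        # → 0: "two" is not a distinct class here
print((1 + 1) % 2)   # → 0: addition stays inside {0, 1}
print((1 * 1) % 2)   # → 1: so does multiplication
```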

None of those universes care how we define the rules of the game we are playing at the moment.

The answer to 1 is that mathematics is a lot like a story or poem. It objectifies what we subjectively understand. That objectification I can call a "game" although it could be a "story" or a "limerick".

The answer to 2 is similar. When I write a limerick it conforms to a certain pattern or it is not a limerick. When I construct a mathematical structure, what I derive from it follows logical patterns or it is not a mathematical structure.

The missing question is does the universe behave like any of these mathematical structures (or stories or poems or physical theories) that we might happen to create? We hope it does, but we are led into errors when we take these objective artifacts of our subjectivity too literally. In spite of what some people like to believe, we cannot completely dump our subjectivity into something objective. 

There is another problem. When we do not have physical evidence for something we sometimes rely on these objectified, mathematical structures to guide us. This creates a blind spot. For example, is time in reality a mathematical continuum of infinitesimal points? We cannot physically verify infinitesimals given quantum limitations to the discrete. Furthermore, the assumption that time is a mathematical continuum leads to paradoxes in physical reality, notably Zeno's, even though the paradoxes might have been resolved mathematically.

----------


## desiresjab

> Let's consider the claim that "two follows one". I agree with you that that statement is true in every universe since none of those universes care one way or the other about it. Similarly, the game Tic-Tac-Toe is true in any universe. A limerick I write is a limerick in any universe.


Tic-Tac-Toe is true in any universe, but it is not much of a tool for exploring universes. Its laws, too, were always possible in any universe, but it is not fundamental to any of them. 




> To get more specific, what does it mean to say that two follows one? If we are in the integers it means that there is a binary relationship, in this case a strict total ordering, containing the pair (2,1) requiring that 2 > 1.


This argument does not describe a universe, friend, nor does it have anything to do with the cardinal successor of 1. You ask what it means. It means two is the cardinal successor of 1. Your argument is totally irrelevant, a true red herring. 




> We could easily switch that relationship around and say that "one follows two" by changing the binary order relationship to include (1,2) and not (2,1).


And you think this has anything to do with cardinal successors? This is as relevant as looking at 18 and claiming 8 must be the successor of 1.




> In general if a > b then we make b > a. That is also true in every universe, because the existence of any of those universes is not dependent upon how we define that binary order relationship.


Again, sir, we are not talking about binary relationships. Notice you are not counting.




> We can go further. Consider the finite field of two equivalence classes labelled 0 and 1. 0 contains all the integers that are even and 1 contains all the integers that are odd. In that field the statement "two, as an equivalence class, does not exist" is true in all universes.


Successors, sir, fundamental counting--that's where we are. You are not counting in this example, you are blinking two values as in QR.




> None of those universes care how we define the rules of the game we are playing at the moment.


What makes you think so? They care a lot, if you want to put it that way. They care enough that they will not let you change any laws, you have to use the ones that preceded your arrival. The universe does not care about Tic-Tac-Toe because Tic-Tac-Toe is not integral to it.

A man made up Tic-Tac-Toe. Did a man make up the mechanics that cause 4n+1 numbers to behave differently than 4n+3 numbers in QR? A man saw that they behaved that way; a man did not make up that they behaved that way. They were behaving that way always, before Euler conjectured it and before Gauss proved it. A man did not make up the fact that numbers will behave this way. There is no other way possible for them to behave. Ineluctable: can be no other way.





> The answer to 1 is that mathematics is a lot like a story or poem. It objectifies what we subjectively understand. That objectification I can call a "game" although it could be a "story" or a "limerick".


Before, you said mathematics _was_ a story; now you say it is _a lot like a story or a poem_. Make up your mind.

You are really astray and I do not know if I can help you. You are cruising at a very high level of abstraction while making judgments about the lowest level in the universe. Because a mailbox and a car are both red, at a particular level of abstraction they are alike. Because a peach and a baby are both soft, at a certain level of abstraction they are the same. As you descend to lower levels the differences between babies and peaches become clear. They are only superficially alike. You see there are a lot more differences than there are similarities between the two.

You are making high level abstractions to point out superficial commonalities.




> The answer to 2 is similar. When I write a limerick it conforms to a certain pattern or it is not a limerick. When I construct a mathematical structure, what I derive from it follows logical patterns or it is not a mathematical structure.


High level of abstraction to point out superficial similarity. 




> The missing question is does the universe behave like any of these mathematical structures (or stories or poems or physical theories) that we might happen to create? We hope it does...


Which part of the universe are you talking about? Again, you apparently mean only cosmological or quantum scales. Here at mammal scale, math works just fine to capture order and help us make better decisions by the millions every day. 




> There is another problem. When we do not have physical evidence for something we sometimes rely on these objectified, mathematical structures to guide us. This creates a blind spot. For example, is time in reality a mathematical continuum of infinitesimal points? We cannot physically verify infinitesimals given quantum limitations to the discrete. Furthermore, the assumption that time is a mathematical continuum leads to paradoxes in physical reality, notably Zeno's, even though the paradoxes might have been resolved mathematically.


You are like a madman who knows only certain things to repeat. There is no blind spot created by math. Infinitesimals cannot be verified. What are you going to do, find a smallest one?

Most of the mechanics of the universe were a blind spot to early man. Math reduces and illuminates them; it does not create them. Your problem is that you are disappointed the lighting is not perfect. You are mad because when the light of mathematics is shined on the universe it does not show everything with 100% resolution. We have to know more laws than we do now. When those laws of numbers are found, some of them may apply to your obsession. Until then, hope for a great mathematician to arise, because the solution, if it comes, will be in mathematics; it will not come from people with prayer books. The people with prayer books are not even searching for the same kind of answer. I would consider it a miracle if people praying were ever of instrumental use in science.

----------


## YesNo

> This argument does not describe a universe, friend, nor does it have anything to do with the cardinal successor of 1. You ask what it means. It means two is the cardinal successor of 1.


Where is the number 1 in the universe?

----------


## desiresjab

It is not anyplace.

----------


## YesNo

I see. It is a game we created. My cat doesn't play the game (to my knowledge). I don't even think my computer really _knows_ what the number 1 is.

----------


## desiresjab

> I see. It is a game we created. My cat doesn't play the game (to my knowledge). I don't even think my computer really _knows_ what the number 1 is.


Am I still banned?

----------


## desiresjab

I don't know if you got the question out just right. But since it is the big question in all of this, I have to address it.

If we start with 1, it would not hurt to say what 1 is, right? I think that is what you meant. 

Words like "unity" or "singularity" only stall the answer. They are not what a person who asks the question means to get at. Those two words already contain the notion of oneness, so we cannot use them to define oneness, can we now? I mean really.

Everything else is defined neatly in terms of 1. But what is 1? That is it, isn't it? Here is where it gets thorny philosophically.

Everything after 1 is mathematically defined, but is 1 mathematically defined?

Even though there is a mathematical operation 0+1=1, that would be getting ahead of ourselves.

The idea of 1 seems to be philosophical. Translated to counting, it can only be existence itself, as opposed to no existence. 1 and 0 are existence and non existence, mathematically. Ah! but that is one level of abstraction up from the abstract concept of existence itself, isn't it? I just learned something. 

Further, the reason mathematicians did not start at 0 to define numbers is that they would then have to explain existence itself when 1 appeared from nowhere.

1 is the assumption of existence, which is the foundational assumption of mathematics, maybe of philosophy too. Once we do that, everything follows nicely, in math at least, and we can even go back at that point and say, yes, 0+1=1.

Unless you must call the assumption of existence a game, where is the game? We conceive of nothingness, observe we exist, and further observe there is a proliferation of things that exist. If you want to enumerate things that exist accurately you must use numbers to count them. Each number is the successor of the number in front of it, exceeding it by exactly one, the unit already defined and brought out of nothingness to represent existence. One is one of anything—a car, a house, a dumpling, an idea.

We know we exist. I myself believe we are not what we think we are by a long shot, but in one way or another we and other things do exist. We have every right to assume that we exist and that other things exist as well.

We invented symbols for existence and non existence, 1 and 0. Man was well along in the civilization process before the idea of 0 came and stuck.

The truth is, historically mankind started with one, when it came to counting, since it was a reasonable assumption that things existed. It was a long time later when they got around to explaining how zero worked in every situation. Very little in math ever came easy after counting. Counting was pretty natural, in the sense that enumeration could easily be verified with the senses, but counting still took a long while to develop anyway.

I do not see counting as a game, nor the assumption of existence as making a rule. I do not see keeping track of enumeration as a game.

Math can be used like a game, one can think creatively and inventively with it, but the fundamental propositions of mathematics are no game, they came right out of counting what was already there using number successors before anyone ever bothered with the concept of number successors.

----------


## desiresjab

It is OK to view nature as your opponent to be mastered, but you would not want to mistake _any_ model for the actual, even the game model for mathematics, right?

Math can be fun like a game. I want it to be. It is to many.

Even non-commutativity in multiplication came about to satisfy multiplication of matrices and groups, not as an arbitrary experiment with operations. Matrices are shadowed by much in nature and our daily lives. Matrices are a branch of mathematics that does not count things, but the magnitudes of the numbers in the columns and rows of a matrix are interpreted exactly as in counting. 26 is still 26.

It is all right to approach mathematics like a game when you do math research. It should be fun. Satisfying your curiosity is a fun thing. 

In order for matrices to reflect the way certain things happen, one operation (multiplication) had to be tweaked. Galois was following nature, maybe looking for it.

Since the whole dispute in my mind now comes down to the number 1 and how it got there, I can more easily make my points.

----------


## YesNo

I keep getting those "you are banned" messages when trying to post as it looks like you do as well.

If our subjectivity is not a game, then neither is mathematics. I would also elevate poetry to being something more than a game.

However, what comes out of our subjectivity is partial. We cannot take it too literally.

----------


## desiresjab

> I keep getting those "you are banned" messages when trying to post as it looks like you do as well.
> 
> If our subjectivity is not a game, then neither is mathematics. I would also elevate poetry to being something more than a game.
> 
> However, what comes out of our subjectivity is partial. We cannot take it too literally.


Yeah. I consider normal language much more complex than mathematics. Programming Deep Blue to beat Kasparov in chess was fantastic, but the far more formidable task was programming Watson to beat Ken Jennings at Jeopardy!. The procedures of chess are rather mathematical at heart; Jeopardy! is not. I believe Watson was not allowed to simply read the clues--for that would have happened instantaneously--but had to take them in and understand them. Watson had to understand all the puns and allusions in the typical Jeopardy! clue. This is much closer to understanding poetry than it is to math.

Connotation and suggestion are so complex. The same images will not form in our minds as we read Shakespeare, nor the same thoughts. Words do not have equal signs between them; even synonyms do not. Every word is different from every other. The same word will have different connotations in a different setting. This is not true of the number 6.

----------


## YesNo

The number 6 can mean all kinds of things to us. It can be a composite number. It can be one-third of the "beast" 666. It can suggest a six-pack of whatever we want at the moment.

----------


## Dreamwoven

> I keep getting those "you are banned" messages when trying to post as it looks like you do as well.


This is weird, that both of you have gotten such banned messages.

----------


## YesNo

Although I probably deserve it for all my hell-bent sins, it seems that we get them when we use special characters to format math concepts in a post and the software thinks we are using a browser that does not allow ads to display.

----------


## Dreamwoven

What is a special character? Like Chinese? It would be nice to be rid of ads, if that is possible.

----------


## YesNo

Like the symbols above the numbers on the keyboard. I figured it is best not to touch them at least when there are numbers next to them.

As far as ads go, I usually don't mind them. I have been known to click on one or two. They just have to display rapidly.

----------


## desiresjab

> Like the symbols above the numbers on the keyboard. I figured it is best not to touch them at least when there are numbers next to them.
> 
> As far as ads go, I usually don't mind them. I have been known to click on one or two. They just have to display rapidly.


I am pretty sure that is not it. The site was recently restructuring some stuff.

----------


## desiresjab

Yes/No, even now I ponder quadratic reciprocity every day. I love the concreteness of it compared to abstract philosophical talk.

Like I said before, I am dumb, so it takes me a long time to see things.

But by putting a little bit next to a little bit, I have finally seen what I wanted to see, not more than five minutes ago for the first time.

No abstract algebra or group theory needed, just a minute inspection of the details of Eisenstein's proof.

One can see and understand almost all the details of Eisenstein's proof without understanding why it proves QR.

I had already figured out that the dimensions of the rectangle represented the scale of the relative sizes of the moduli working against each other in QR. What I had not put into words was that this representation of scale is only activated by the diagonal. Hold that thought.

* * * * *

The other thing is a clear concept of just what the multiplication 

[(p-1)/2][(q-1)/2] stands for. What does it stand for? First, each factor is the number of quadratic residues of its respective prime.

This multiplication stands for _any one of 5 things combined with any one of 3 things_. In other words, it counts how many ways the quadratic residues of each prime can be combined with one another: fifteen ways, in this case.

Each combination has its chance. Each combination is a lattice point. The diagonal slices WAXY one more time, dividing the number of lattice points a final time. If it is slicing through an odd number of lattice points, triangles WAY and YAX are forced to different parities. This only happens if they are both 4n+3 primes.

The diagonal expresses the gear size of each prime. Set with the bigger prime as width rather than height, I can get many more lattice points in WAY than YAX with a large enough size discrepancy between primes. I am pretty sure of that conjecture. It is the polar opposite of my other conjecture.

Yes, p and q are the individual gears, but the diagonal is them meshed together.
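That counting can be checked with a few lines of Python. This is my own sketch, not part of any proof; the function name is mine:

```python
# Sketch: list the quadratic residues of two primes and confirm that
# [(p-1)/2][(q-1)/2] counts the ordered pairs of residues.
def quadratic_residues(p):
    """Nonzero quadratic residues mod a prime p, in order of first appearance."""
    seen = []
    for a in range(1, (p + 1) // 2):  # a and p-a square to the same residue
        r = (a * a) % p
        if r not in seen:
            seen.append(r)
    return seen

p, q = 7, 11
rp, rq = quadratic_residues(p), quadratic_residues(q)
print(rp)   # [1, 4, 2]
print(rq)   # [1, 4, 9, 5, 3]

pairs = [(a, b) for a in rp for b in rq]
print(len(pairs), ((p - 1) // 2) * ((q - 1) // 2))   # 15 15
```

The two lists are exactly the residue sets {1 4 2} and {1 4 9 5 3} used later in the thread, and the pair count matches the multiplication.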

----------


## desiresjab

We start with a p by q rectangle. We fill in the lattice points. We divide the rectangle by two vertically, and divide it by two again horizontally.

Only now do we divide it by two once more with the diagonal, allowing the diagonal to be "last agent," as you might prefer.

With the construction of the rectangle, the gears sizes are set. With the construction of the diagonal, the gears are meshed together and running.

Each gear is a period, a modular cycle of remainders. When you combine two periods, you get a larger period, like a period of 77 for pq, before everything is back to where it started. The original marks on the two gears will again be aligned vertically with a stationary reference point.

The upper triangle WAY can "hog" lattice points, because the extreme "lean" of the diagonal forces lattice points into WAY in the lower left-hand corner of the rectangle, but the best the lower triangle YAX can ever do is break even.
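The lattice count under the diagonal can be sketched numerically. This is my own code with my own name `mu`; it is the standard floor-sum that appears in Gauss/Eisenstein-style lattice proofs, not anything from the thread:

```python
# Sketch: mu(p, q) counts the lattice points strictly under the diagonal
# in one triangle; in column k there are floor(p*k/q) of them.
def mu(p, q):
    return sum(p * k // q for k in range(1, (q - 1) // 2 + 1))

p, q = 7, 11
lower, upper = mu(p, q), mu(q, p)
print(lower, upper, lower + upper)   # 7 8 15

# The two triangles together fill the quarter rectangle, so
# lower + upper = [(p-1)/2][(q-1)/2], and the reciprocity law reads
# (p/q)(q/p) = (-1)**(lower + upper).
```

For 7 and 11 the split is 8 and 7, the two-odd-parity case the thread keeps returning to, since both primes are 4n+3.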

----------


## YesNo

> No abstract algebra or group theory needed, just a minute inspection of the details of Eisenstein's proof.


I agree that a better understanding should not need those tools. They help to generalize and perhaps prove results.




> This multiplication stands for _any one of 5 things combined with anyone of 3 things_. In other words, it counts how many ways the number of quadratic residues of each prime can be combined with one another, fifteen ways, in this case.


Do you have an example? I don't follow the 5 and 3 things.




> Each combination has its chance. Each combination is a lattice point. The diagonal slices WAXY one more time, dividing the number of lattice points a final time. If it is slicing through an odd number of lattice points, triangles WAY and YAX are forced to different parities. This only happens if they are both 4n+3 primes.


I agree. You will only get an odd number if both primes have remainder 3 modulo 4.




> The diagonal expresses the gear size of each prime. Set with the bigger prime as width rather than height, I can get many more lattice points in WAY than YAX with a large enough size discrepancy between primes. I am pretty sure of that conjecture. It is the polar opposite of my other conjecture.


Is there an upper bound on this discrepancy?




> Yes, p and q are the individual gears, but the diagonal is them meshed together.


"Gears" is a nice metaphor. I had not thought of it like that before.

----------


## desiresjab

> I agree that a better understanding should not need those tools. They help to generalize and perhaps prove results.
> 
> 
> 
> Do you have an example? I don't follow the 5 and 3 things.
> 
> 
> 
> I agree. You will only get an odd number if both primes have remainder 3 modulo 4.
> ...


5 and 3 are (p-1)/2 and (q-1)/2 when p=7 and q=11. The simple multiplication is counting the ways three objects can combine with five objects in pairs. There are fifteen different pairs representing how p can pair with q and vice versa. At this point the ground level mechanics are gone and we are looking for something else. We only need to keep our tether line connected to where we started from so we can remember where we are.

----------


## YesNo

I see. These numbers will change depending on the primes involved.

----------


## desiresjab

Duh, I must be slow. I have to admit, I either forgot or never realized that a simple multiplication represents how many ways the objects from two sets can be *paired*. 

In QR I think it is important that [(p-1)/2][(q-1)/2] represents that, not just some normal product as we usually think of a multiplication. Basic multiplications are combinatorial, if you enlarge your viewpoint slightly. That gives something deeper to explore. If I can reverse-map each of the fifteen lattice points... another revelation might be near. Ahem! A mirage is likely, too.

----------


## desiresjab

At least 227 proofs of QR are known. Even the few I know of use a staggering array of techniques and math. There are proofs emanating from:

1 Modular Arithmetic
2 The Pythagorean Theorem
3 Abstract Algebra
4 Group Theory
5 Geometry
6 Combinatorics
7 Trigonometry
8 The Binomial Theorem
9 Class Field Theory
10 Calculus (real analysis)
11 Calculus (complex analysis)
12 Euclidean Algorithm
13 The Chinese Remainder Theorem
14 Vectors

Additional fields or functions that I suspect proofs emanate from:

1 Euler's Totient Function
2 Game Theory
3 The Divisor Function
4 Elliptic curves
5 Modular Functions
6 Discrete Logarithms
7 Primitive Roots
8 Fermat's Little Theorem
9 Statistics

Each of these fields has probably produced numerous proofs with slightly different twists. QR is centrally connected, as I keep mentioning; otherwise these diverse fields would not all have relations with it.
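None of those proofs is needed just to watch the law hold. Here is a small Python check of my own, using Euler's criterion for the Legendre symbol (a technique adjacent to, but not on, the list above):

```python
# Sketch: verify quadratic reciprocity numerically for small odd primes.
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion: a**((p-1)/2) mod p."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def check_reciprocity(p, q):
    lhs = legendre(p, q) * legendre(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    return lhs == rhs

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23]
print(all(check_reciprocity(p, q)
          for i, p in enumerate(odd_primes)
          for q in odd_primes[i + 1:]))   # True
```

The sign flips exactly when both primes are 4n+3, which is the behavior the whole discussion of WAY and YAX parities is tracking.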

----------


## desiresjab

Of course each lattice point only represents _any old pair of quadratic residues_ (from anything that is said in the proof). My experimental idea is to replace each lattice point in WAXY with a specific pair of residues. There may be a revealing way of matching each particular lattice point to a specific residue pair. How to match them is an intriguing question, which I am hoping will later become obvious, because that would mean there is a superior way of mapping point to pair. I think I will carry out the number crunching. More later.

----------


## YesNo

Maybe you need to study one of the other QR proofs to help stimulate new ideas?

----------


## desiresjab

The two sets of quadratic residues for p and q are: {1 4 2} {1 4 9 5 3}.

The first set corresponds to the height of the rectangle and the second to the width, though in the final result p is represented by the lower half of WAXY and q by the upper half. 

(1, 1) is the first pair, and we will take those to be coordinates. That one is easy to assign a point. (1, 4) is next, and we take those to be coordinates too, and so on and so forth.

* * * *
(1, 1) (1, 4) (1, 9) (1, 5) (1, 3) 

(4, 1) (4, 4) (4, 9) (4, 5) (4, 3)
* * * *
(2, 1) (2, 4) (2, 9) (2, 5) (2, 3)

Only those pairs with an asterisk have unique coordinates in WAXY; the other pairs simply reduce to duplicates. See a pattern? Remember, the first coordinate is reduced (mod 3), and the second is reduced (mod 5). And do not forget!! Our first coordinates above are vertical coordinates, and the second coordinates are horizontal. That is the opposite of what we are all used to from algebra, where the horizontal x-coordinate is always the first element in the ordered pair and the vertical y-coordinate is the second element. 

If I reversed the order of the digits in the ordered pairs, we would still get eight ordered pairs with asterisks, as long as I reversed the moduli too.

Were eight asterisks a coincidence above, or will there always be exactly the same number of pairs with natural coordinates as there are lattice points in one of the triangles of WAXY?

Only more grinding will tell.
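The eight can be ground out mechanically. This is my own sketch of the reduction described above; the helper name `wrap` is mine:

```python
# Sketch: reduce each residue pair (a, b), with a a residue of 7 and b a
# residue of 11, into the 3-by-5 box by taking a mod 3 and b mod 5,
# wrapping 0 up to the modulus so coordinates run 1..m, not 0..m-1.
def wrap(x, m):
    r = x % m
    return r if r else m

rp = [1, 4, 2]          # quadratic residues of 7
rq = [1, 4, 9, 5, 3]    # quadratic residues of 11

reduced = {(wrap(a, 3), wrap(b, 5)) for a in rp for b in rq}
print(len(reduced))   # 8
```

The fifteen pairs collapse to eight distinct points, matching the eight asterisked pairs.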

----------


## desiresjab

> Maybe you need to study one of the other QR proofs to help stimulate new ideas?


This is a new idea, lad. It takes quite a lot of labor to investigate one idea that is already deep in, as you can see from my last post, which dealt with only one pair of primes, 7 and 11. I am so scattered around I cannot do it all in one day. But an amateur is having fun.

----------


## YesNo

> Were eight asterisks a coincidence above, or will there always be exactly the same number of pairs with natural coordinates as there are lattice points in one of the triangles of WAXY?


I like how you paired the residues. I hadn't thought of them in that way before.

----------


## desiresjab

> I like how you paired the residues. I hadn't thought of them in that way before.


I hadn't either. Last night I did more numerical experiments. Those pairs that work directly as coordinates within WAXY without being reduced, and those that do not, always seem to partition the two halves identically as the diagonal does, but I can no longer conjecture that the larger half will always be the one expressible as coordinates. For 5 and 11, WAY and YAX contained 6 and 4 lattice points respectively, but only four expressions for lattice points out of the ten total. There goes that one. The expression went with neither the larger half nor the one on top. At least we know the one on top will always have as many or more points than the bottom, since it will always get the point (1, 1). Just the lean of the diagonal, because of our convention of always making the short side the vertical dimension, gets that done. Because we can remember a few conventions and have Wikipedia access for Eisenstein's rectangle, we can communicate more easily. Tug lines are good in deep water.

More might come of investigating residue pairs further. It could also be a dead end that displays a lot of not-unlikely connections without helping us get deeper into the process. Both are true of any investigation, however, and since I do not see any other method that might possibly lead ahead right now, I will roll it around for a while, as usual, without doing any more work.

If I can figure out what the number of natural expressions represents, that might be used as another means of going deeper into the machine. 

I suppose the thing to look at now is the distribution of the "natural points" within WAXY, besides (1, 1), which is always predictable, and see if they tell me anything new or are simple reflections of ideas I already know.

One thing jumps out--the pairs that make it will be 1-heavy, with both coordinates small. The 1 column and the 1 row should be more full than the others.

----------


## YesNo

The residue pairs seem like an interesting approach. I don't know enough about the topic to know if anyone else has looked into this. It may lead to some other unexpected results.

----------


## desiresjab

> The residue pairs seem like an interesting approach. I don't know enough about the topic to know if anyone else has looked into this. It may lead to some other unexpected results.


There are many experts right now who could tell us exactly where pairing residues leads. I never even pretend I might out-think them or dream up an approach no one else has tried. The smartest people in math are simply too smart. Questions we have to dig into with numerical work, they solve accurately in their heads. It is probable that if this leads anywhere we will end up in the territory of other maths we know nothing about, speaking in our usual language.

All the "unnatural pairs" pairs we generate with our multiplication are in reality duplicates of the natural pairs. I feel strongly that the natural pairs have some other quality not possessed by the pairs that have no representation at all. I could almost put it into words right now, but they would be only words with no mathematical help yet. It is a matter of plugging both sets in somewhere and seeing the difference in their behavior.

Now wouldn't it be a shade of wonderful if it was something as simple as they are divided according to those which are residues two ways and those which are quadratic residues only one way? Now I just have to decide how I am going to decode it. But how does that work if the division is something like 6 and 4, as it is with primes 5 and 11? I am learning not to box myself in with conjectures.

For now we can feel confident assuming we know the pairs will always partition out the same, through the coordinates or through the diagonal division, but are unable to yet say why or how. Euler might be giggling at us right now.

* * * * *

People might wonder how we can spend so much time on quadratic reciprocity without being even dumber than we admit to, when college juniors and seniors take classes in number theory where it is presented and pass their tests. It was just one more class in a long string of difficult classes for college students majoring in math or science.

Do they really understand it? They pass the tests, I believe we would pass them, too. Furthermore, we would amaze many, probably including the professor, with how thoroughly we understood many aspects of the theory.

I took calculus and differential equations, and I did better than merely pass those classes. What you learn in such classes is how to operate the formulae and which ones to use when. Normal college classes are not about in-depth looks at particular problems, but about learning each new language and how to operate in it. None of the students come away with a greater understanding than we have right now.

As graduate students in math they might confront this problem full on in a class, once they have had both group theory and abstract algebra. Those proofs are from the eagle's perspective, many levels above our ground-level and subterranean approaches. Their view tells them something like: the crankshaft will not turn, at any rate, if the carburetor is disconnected. It does not take them down into the engine where the numbers are. At best, they learn how certain classes or fields behave, which is where we may end up yet.

The 38 lectures I watched on abstract algebra did not prove QR. A few times it seemed like they were getting close to the same ideas, though. They did prove quite a few other propositions, however, including Fermat's little theorem. The thing that knocks you off your feet is how brief the proofs are. A few sweeps of the chalk and they are done. That is how high they are soaring.

----------


## desiresjab

* * * *
(1, 1) (1, 4) (1, 9) (1, 5) (1, 3) 

(4, 1) (4, 4) (4, 9) (4, 5) (4, 3)
* * * *
(2, 1) (2, 4) (2, 9) (2, 5) (2, 3)


Here are the quadratic pairs for 7 and 11. On this word processor, the asterisks may not line up where I want them. They didn't. The point is, if you take the moduli back to three and five, those fifteen points reduce back to only eight points. Only eight points are represented above. There are more lattice points in WAXY that have no representation at all. Let us track down the other seven.

(1, 2) (2, 2) (3, 3) (3, 2) (4, 2) (3, 1) (3, 4)

The point (4, 2) is not actually in WAXY. It is too large. Reduced all the way, it is merely the point (1, 2).

Wait again. (3, 5) is a point represented nowhere. (4, 2) can be reduced by the smaller moduli; (3, 5) cannot, except to (0, 0), which mamma don't allow none of around here. 

(3, 5) must replace (4, 2) in the list of pairs above. Here it is:

(1, 2) (2, 2) (3, 3) (3, 2) (3, 5) (3, 1) (3, 4)

We have their names, now what can we investigate or interpret?

What can they not do that the others can?

----------


## desiresjab

* * * *
(1, 1) (1, 4) (1, 9) (1, 5) (1, 3) 

(4, 1) (4, 4) (4, 9) (4, 5) (4, 3)
* * * *
(2, 1) (2, 4) (2, 9) (2, 5) (2, 3)

Only eight distinct points are represented above, if you use the reduced moduli. All fifteen pairs are valid; some just happen to lie outside of WAXY on the coordinate system, though of course they are within the larger ABCD, to keep things in perspective. Magically, they all reduce back to natural pairs already listed within the matrix. There are more lattice points in WAXY. Let us track down the other seven.

(1, 2) (2, 2) (3, 3) (3, 2) (3, 5) (3, 1) (3, 4) 

Here are the two residue sets again. {1 4 2} {1 4 9 5 3}.

We intuitively understand why the point (1, 1) is represented. Can we understand why a point like (1, 2) above is not? We want to understand it in a better way than just that the two sets, when combined in the order shown, cannot produce that pair. What do we need to try?

For (1, 2)
1 is a residue of 7, but 2 is not a residue of 11. That is a start. Keep going.
For (2, 2)
2 is a residue of 7, but 2 is not a residue of 11.
For (3, 3)
3 is not a residue of 7, but 3 is a residue of 11
For (3, 2)
3 is not a residue of 7, and 2 is not a residue of 11
For (3, 5)
3 is not a residue of 7, but 5 is a residue of 11 
For (3, 1)
3 is not a residue of 7, but 1 is a residue of 11
For (3, 4)
3 is not a residue of 7, but 4 is a residue of 11

Remember, these are pairs which are coordinates but were not generated in our combinatorial multiplication. There is at least a one-way rejection for every pair on the list. But in the matrix of fifteen pairs above, neither element is ever rejected. Even the pairs that lie outside of WAXY are valid; they are just not inside WAXY, which is also true of some of the quadratic residue pairs generated in the multiplication. Since some of the valid pairs generated in the combinatorial multiplication lie outside of WAXY, we can only expect some of the points inside WAXY to be valid residue pairs. In fact, eight of them are, and seven of them are not. This accords exactly with the split of the diagonal, and with the split of the natural expressions. 

Those lattice points within WAXY that are not two-way accepting fill the second column and the third row exclusively, forming a right-heavy bar on the capital T of their shape. This puts 5 of the 7 points in WAY, but 2 of them in YAX. Those seven are the same ones we said earlier had no "natural expression." Natural expressions accompany pairs of two-way acceptance only. We wonder on the side if only one is usual.

For primes 7 and 11 and the rectangle WAXY, eight of the lattice point coordinates (the same ones with natural expressions) do indeed have two-way acceptance pairs as coordinates, and seven have at least one rejection. A perhaps complexifying aspect here is that one of those seven pairs (3, 2) is two-way rejecting.

When Eisenstein's rectangle ABCD is placed on a coordinate system, eight of the fifteen pairs of coordinates within WAXY are coordinates where both elements are residues of their respective primes. Those are exactly the same pairs with natural expressions that were generated in the combinatorial multiplication. Six pairs of coordinates have only one quadratic residue, and one pair, (3, 2), has none, i.e. 3 is not a residue of 7 and 2 is not a residue of 11. Graph-wise, (3, 2) is where the leg of the T and its bar intersect and occupy the same lattice point.

Somehow, the diagonal and the number of natural expressions manage to cut WAXY correctly in terms of two numbers to be used as exponents, innocent lattice points, and actual coordinates. The diagonal does not cut all the naturals to one side, but it gets the numerical partition correct. The T marks the exact positions. That coordinates even apply is great!

7 and 11, being a pair of 4n+3 primes, are never mutual hosts to one another, but some of the pairs generated are, in a sense. We generated the pairs the old-fashioned way, through a multiplication process more basic than the one taught in grade school, and found out that 15 is the number of distinct pairs generated from two sets of 3 and 5 members, respectively. Eight of these pairs are mutually accepting pairs, and seven of them are not.
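The 8/7 classification above is easy to mechanize. This is my own sketch, following the post's convention that the first coordinate is vertical (tested against the residues of 7) and the second horizontal (tested against the residues of 11):

```python
# Sketch: classify every lattice point (y, x) of the 3-by-5 box by
# "two-way acceptance": y must be a residue of 7 and x a residue of 11.
rp = {1, 4, 2}          # quadratic residues of 7
rq = {1, 4, 9, 5, 3}    # quadratic residues of 11

accepted, rejected = [], []
for y in range(1, 4):        # (7-1)/2 = 3 rows
    for x in range(1, 6):    # (11-1)/2 = 5 columns
        (accepted if y in rp and x in rq else rejected).append((y, x))

print(len(accepted), len(rejected))   # 8 7
print(sorted(rejected))
```

The rejected points are exactly the second column plus the third row (the T shape), with (3, 2) the doubly rejected point where leg and bar cross.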

Now it's back to the think tank.

----------


## YesNo

As you mentioned, math classes are more about showing students how to use the language and solve some problems. Generalizing does take one away from the details.

I was looking at the Sierpinski problem recently since that is the one that I have a computer working on for PrimeGrid. I am trying to see if I can get Python and MySQL to help generate covering congruences for some numbers that are unknown whether they are Sierpinski numbers or not. 

The Sierpinski problem concerns numbers of the form k·2^n + 1 where k is odd. A Sierpinski number is an odd k such that k·2^n + 1 is composite for every n ≥ 1.
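For what it's worth, the smallest proven Sierpinski number, 78557, can be checked mechanically against its classic covering set {3, 5, 7, 13, 19, 37, 73}: the multiplicative order of 2 modulo each of those primes divides 36, so one full period of n suffices. A small sketch (the helper name is mine):

```python
# Verify that every 78557*2^n + 1 is divisible by at least one prime
# in the covering set {3, 5, 7, 13, 19, 37, 73}.
COVER = [3, 5, 7, 13, 19, 37, 73]

def covered(k, n, cover=COVER):
    """Return a prime from the cover dividing k*2**n + 1, or None."""
    value = k * 2**n + 1
    for p in cover:
        if value % p == 0:
            return p
    return None

# The orders of 2 modulo the cover primes all divide 36, so checking
# one full period n = 1..36 proves the cover works for every n.
assert all(covered(78557, n) for n in range(1, 37))
print("78557 is covered for all n")
```

The same loop, with a candidate cover generated from your MySQL tables, would be one way to test covers for the unresolved k values.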

----------


## desiresjab

Paul Erdős made some discoveries concerning covering congruences. The Sierpinski gasket is a fractal object.

----------


## desiresjab

5 and 17 are a strange pair. They are both 4n+1. Phi/4 is 16, so there are plenty of factors of two left. Yet when the diagonal divides them it partitions them to 9 and 7. Since they are mutually rejective with that many factors of two left, that is all the diagonal could do. 

Sometimes the diagonal is forced to partition an odd number into two odd ones, as in 7 and 11, when there are only two factors of two (the minimum), or break an even one into two odds of different value, such as for 5 and 11, where there are only three factors of 2. But this is the first time I have seen a highly even number partitioned unevenly. The diagonal always gets it right. I have not had time to check the "natural" pairs for 5 and 17 yet to see what they say. There are only 16 of them, so it will not be difficult.

----------


## desiresjab

> 5 and 17 are a strange pair. They are both 4n+1. Phi/4 is 16, so there are plenty of factors of two left. Yet when the diagonal divides them it partitions them to 9 and 7. Since they are mutually rejective with that many factors of two left, that is all the diagonal could do. 
> 
> Sometimes the diagonal is forced to partition an odd number into two odd ones, as in 7 and 11, when there are only two factors of two (the minimum), or break an even one into two odds of different value, such as for 5 and 11, where there are only three factors of 2. But this is the first time I have seen a highly even number partitioned unevenly. The diagonal always gets it right. I have not had time to check the "natural" pairs for 5 and 17 yet to see what they say. There are only 16 of them, so it will not be difficult.


I did check those sixteen pairs of coordinates, and any conjecture linking the number of naturally expressible coordinates to the partitioning of lattice points crashes. Those coordinates are merely the ones found strictly within WAXY. Only four coordinates out of sixteen are of this type for the two primes 5 and 17, and we know that number does not correspond to either value of the partition.

*With certainty*, we know that the diagonal for two 4n+3 primes will "thread" through an odd number of lattice points to partition them into one or other of WAY or YAX in unequal odd numbers.

*With certainty*, we know that a 4n+3 prime and a 4n+1 prime of lowest evenness for its kind (only divisible by 2 twice), will "thread" through an even number of points, partitioning them into odd halves which may be either equal or unequal, as far as we know.

*With certainty*, we know the two situations above were forced by a limited number of factors of 2. 

In the Eisenstein rectangle lattice point graph for the two primes 5 and 17, the "lean" of the diagonal steals away three lattice points on the bottom row, but two of them get made up somewhere.

Whether or not the values of WAY and YAX can ever differ by more than two, becomes an interesting question in itself.

----------


## YesNo

You might try constructing proofs of the certain items if for no other reason than to get a foundation for future results. There is a theory of lattice points that I am unfamiliar with that might be a place to start. 

I thought you found examples where the WAY and YAX differed by more than two lattice points. Perhaps not.

I'm working on a problem at the moment and trying to get Python to generate some examples or a solution. The claim is that the sum over n starting with 1 of (n-1)n is never prime. So the goal is to find a prime in that sequence of integers or find a covering congruence to show that the sequence has no primes.

I'll be happy if I can get a script to generate composites of this form up to n = 1000. I think I have a workable algorithm for that, but I don't even have a way to show that a covering set of primes actually covers all of the numbers in the sequence.

----------


## desiresjab

> You might try constructing proofs of the certain items if for no other reason than to get a foundation for future results. There is a theory of lattice points that I am unfamiliar with that might be a place to start. 
> 
> I thought you found examples where the WAY and YAX differed by more than two lattice points. Perhaps not.
> 
> I'm working on a problem at the moment and trying to get Python to generate some examples or a solution. The claim is that the sum over n starting with 1 of (n-1)n is never prime. So the goal is to find a prime in that sequence of integers or find a covering congruence to show that the sequence has no primes.
> 
> I'll be happy if I can get a script to generate composites of this form up to n = 1000. I think I have a workable algorithm for that, but I don't even have a way to show that a covering set of primes actually covers all of the numbers in the sequence.


I may have stated incorrectly once I had an example. I believe I have no examples of WAY and YAX with a difference of more than two lattice points. These problems get huge to generate by hand with relatively small primes. I have no mathematical software to assist.

Your problem sounds interesting, and either has an echo or a false echo of Fermat.

It is a snap to show that (p-1)(q-1) is Euler's totient function of pq, and that a quarter of it counts our lattice points. Pretend that these two primes are really, really, really huge. We can always tell their types, but ascertaining whether one is a quadratic residue of the other may be next to impossible by hand. What can we do?

We also pretend we have a computer capable of multiplying (p-1)(q-1), in fact we will need one. Trusty division by four is our next step. Now we have something to look at. We can tell the species of this quotient, too.

If the quotient is already an odd number, we know the diagonal will produce an even and an odd number. We must have been dealing with two 4n+3 primes.

If the quotient is a 4n+3 number, at least the possibility if not the certainty of (1)(1) is preserved, though I see no way yet to determine if it is that or (-1)(-1) for very huge numbers.
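The parity observation above is easy to machine-check: (p-1)(q-1)/4 = ((p-1)/2)·((q-1)/2), which is odd exactly when both halves are odd, i.e. when both primes are of 4n+3 type. A sketch (the trial-division helper is mine, just for small numbers):

```python
# Check: (p-1)(q-1)/4 is odd exactly when both primes are 4n+3 primes.
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

odd_primes = [p for p in range(3, 200) if is_prime(p)]

for i, p in enumerate(odd_primes):
    for q in odd_primes[i + 1:]:
        quotient = (p - 1) * (q - 1) // 4
        both_4n3 = p % 4 == 3 and q % 4 == 3
        # odd quotient <=> both primes of 4n+3 type
        assert (quotient % 2 == 1) == both_4n3
print("parity of (p-1)(q-1)/4 matches the prime types for all pairs below 200")
```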

----------


## desiresjab

Interestingly enough, suppose one intended a proof of quadratic reciprocity derived from the totient function. This would only work for odd primes. We have to use the formula (p-1)(q-1), where one or the other of the expressions is (2-1). A diagonal of the Eisenstein diagram of these dimensions will not make numerical sense.

The totient function is only a shortcut for odd primes, but what a shortcut it is.

----------


## YesNo

> I may have stated incorrectly once I had an example. I believe I have no examples of WAY and YAX with a difference of more than two lattice points. These problems get huge to generate by hand with relatively small primes. I have no mathematical software to assist.


If you can use a spreadsheet you can check some of these for low numbers. I use Google Sheets since it is convenient, in cloud storage, and free. Here's a link to a Google sheet I made some time ago about your conjecture: https://docs.google.com/spreadsheets...it?usp=sharing

It looks like 5 and 23 have a difference of 4. (Of course I might have constructed the sheet incorrectly.)
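The 5-and-23 figure can be cross-checked without a spreadsheet, assuming the intended split is the number of lattice points of the (p-1)/2 by (q-1)/2 rectangle below versus above the diagonal y = (q/p)x; since p and q are coprime, no lattice point lies on the diagonal, so a floor sum counts one side exactly. A sketch (the function name is mine):

```python
# Count lattice points of the (p-1)/2 x (q-1)/2 rectangle that fall
# below the diagonal y = (q/p)x, as in Eisenstein's figure.
def lattice_split(p, q):
    half_p, half_q = (p - 1) // 2, (q - 1) // 2
    below = sum((q * x) // p for x in range(1, half_p + 1))
    above = half_p * half_q - below
    return below, above

below, above = lattice_split(5, 23)
print(below, above, abs(below - above))   # -> 13 9 4
```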

There are four tabs on the spreadsheet. On the Configuration tab there are entries for "Prime A" and "Prime B". Those are the only values to change. On the Lattice Points tab is a graph of how the lattice points look. The fractions represent deviations from 0 or the diagonal. The Twin Primes tab is a list of tests for twin primes. It is not fully filled out. The References tab lists places I looked for lattice-point information. The sheet is limited to numbers under 100.

You should be able to copy the spreadsheet from the link and modify that copy should you want to use something like this. You may need a Google account to set up Google drive if you don't have this already.




> Your problem sounds interesting, and either has an echo or a false echo of Fermat.


I am planning on using Fermat's Little Theorem to simplify the calculations in Python. Basically I would be using a^(p-1) ≡ 1 (mod p) for prime p. I have this set up on a Google sheet, but these easily get past the max size of integers on a spreadsheet. So I have to use something like Python. That's a free tool you also might find useful. I am still feeling my way around it, but I have programmed in many different languages for decades.
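Python's built-in three-argument pow does this modular exponentiation directly, reducing mod p at every step, so Fermat's little theorem is a one-liner even for very large primes. A small sketch:

```python
# Fermat's little theorem: a**(p-1) % p == 1 for prime p and a not
# divisible by p. pow(a, e, m) never builds the full power, so a
# Mersenne-prime-sized modulus is instant.
for p in (17, 19, 2147483647):        # 2**31 - 1 is a known Mersenne prime
    for a in (2, 3, 10):
        assert pow(a, p - 1, p) == 1
print("Fermat's little theorem holds for every test case")
```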

----------


## desiresjab

I did it twice by hand, trying to make sure. My rough graphs by hand are not ultimate arbiters, but it appears you may be right. Yet several lattice points are really hard to judge by eye. If we judge there to be 9 and 13 points respectively, and are wrong on one lattice point, that brings the totals to a respectable 10 and 12, except we know that is wrong--neither one of these is a quadratic residue of the other, by the application of easy properties.

However, if we judge there to be 8 and 14 points respectively, being off by one point transfer would make both values negative again, and fulfill that needed condition.

Since you are using software, I tend to go with your results as more definitive in the judging-by-eye department.

----------


## YesNo

I tried it by hand initially, but then I realized I was making too many mistakes. I think the spreadsheet is correct, but I'm not sure. One of the problems with software is that it has many more pieces to check than a proof.

----------


## desiresjab

> I tried it by hand initially, but then I realized I was making too many mistakes. I think the spreadsheet is correct, but I'm not sure. One of the problems with software is that it has many more pieces to check than a proof.


That is why we are still down in the engine room, looking for the fundamental mechanical principle of mere numbers that everything relies on and that quadratic reciprocity expresses. Sometimes investigations turn out to be superfluous to solving the problem, but add to one's knowledge base. I would say that is the rule rather than the exception. We know now that the principle is not related to the number of lattice points expressible in WAXY through primitive combinatorial multiplication to generate coordinates. That was an important thing to get out of the way, once the idea came up.

Have the masters really captured everything there is to know of QR from the ground-level view? I cannot allay the suspicion that the mechanical principle is visible through all the gears, wires and steaming valves, if one stands in exactly the right place and bends down just so with a crane of the neck and peers through the complexity at the cause of it all.

Time was I was sure I had it, and up to a point I did. Once the combined totals of factors of 2 in (p-1) and (q-1) reach 24, however, the outcome of the diagonal split of this even number of lattice points in WAXY cannot be predicted, though one has other rules, properties and laws to consult to usually clear up what the mutual reciprocity is, which is what one is usually after.

We were looking at the number of points in WAXY and their coordinate names as a side issue. I no longer know if it is relevant.

The answer I am looking for, and the way I am looking for it, make a fantastically difficult yet solvable problem, I suspect. This is what I wanted. I see no need to move on. I keep learning more. The engine room is fine.

----------


## YesNo

I've started using pari/gp for calculations needing multi-precision arithmetic. It is also an algebra package primarily for number theory. It allows you to work with matrices and you could probably implement the WAXY pattern for integers larger than what the Google sheet allowed.

The gp part is a calculator and the pari part is a C library. I am more interested in the calculator. There are free C compilers available which could use the libpari library, but I'll give Python a chance first although I suspect pari might be faster. Here is the location of pari/gp: http://pari.math.u-bordeaux.fr/download.html

----------


## YesNo

Since Moffat's gravity theory made a prediction about gravity waves from the big bang different from what the Newton-Einstein theory would predict, I was looking at LIGO, which showed the existence of gravitational waves last year. I am not sure what Moffat's prediction is, but at https://losc.ligo.org/about/ there is a tutorial about LIGO's recent findings with an interactive Jupyter notebook allowing you to play around with the data.

There are other alternative gravity theories. Moffat discusses them in "Reinventing Gravity". One of the benefits of a modified gravity is that dark matter would not be necessary and black hole singularities could be eliminated. 

The inability to find dark matter and the observations of the rotational speed of galaxies are evidence that the Newton-Einstein theory of gravity is incorrect and needs modification. The observed movements of galaxies falsifies the Einstein gravitation theory without dark matter. 

But all of these are theories or models of the universe. Even Moffat's theory is only a map. It is not the reality. Sometimes it is hard to keep the map and reality separate since the only way we can make sense out of reality is through a map.

----------


## YesNo

When Moffat discussed other alternative gravitational theories and the problems in them, he wrote this regarding "quantum gravity", an attempt to combine gravitational theory with quantum theory (John W. Moffat, "Reinventing Gravity", Harper Collins, 2008, page 142):

_Some theorists simply claim that since gravity is observed and quantum mechanical effects are also observed that qualifies as enough experimental evidence that a quantum gravity theory is necessary._
The implication is that quantum gravity is not necessary. The problem with getting quantum gravity to work is taking a theory that works on the quantum level and getting predictions that match what is observed about gravitation on the cosmic level.

If one doesn't need quantum gravity, then there is no need for a "graviton", the so-far-undetected quantum particle of force associated with gravitation in the way the photon is associated with electromagnetic radiation.

----------


## desiresjab

What results if any to report of Moffat's attempt at a time machine? I know he proposes a coil of lasers to produce warpage, where only light is involved in this bending instead of massive objects. He expects to find particles (marked somehow, I assume) when he performs his early experiments that he has already sent back in time to himself, which is mind bending. A single particle is what he is trying to send back or ahead, for now.

----------


## desiresjab

Whoops! Excuse me, please. I got names mixed up. The guy I was thinking of is Ron Mallett.

----------


## YesNo

I remember reading something by Ron Mallett regarding time travel some years ago, but I don't think time travel is possible. That would violate the second law of thermodynamics where we can only go from low entropy to high entropy, from past to future. By the way, I haven't run into any time travelers.

John Moffat has an interesting thing about time at t = 0 (the "big bang"), which he does not consider to be a singularity. He assumes there are two universes, one going into the past and the other into the future. We don't know which one we are in. It helps avoid the singularity at t = 0. Other singularities such as black holes are also eliminated. And he doesn't need dark matter or a multiverse in which the anthropic principle can get us to where we are now. That is, it makes predictions which could be falsified or verified if LISA becomes operational and we can view gravitational waves from the origin of the universe.

But it is only a model. Its main goal is to fit the observations of acceleration in the galaxies that cannot be explained by Einstein's general relativity without assuming there is more matter in the universe than we can observe. It is useful or not if it can make accurate predictions.

----------


## desiresjab

By narrowing our look at Quadratic Reciprocity to twin primes only, we are able to initially highlight those two instances that interest us most, that is, where the larger twin is a quadratic residue of the smaller (and therefore the smaller is a residue of the larger, as well). We want to know what to expect of a set of twins at a mere glance.

Only 8n+1 and 8n+7 primes have 2 (the difference of any two twins) as a quadratic residue. In the case of 19 and 17, 19 is an 8n+3 prime which is a quadratic residue of 17, the 8n+1 prime. In the case of 73 and 71, 73 is an 8n+1 prime and 71 is an 8n+7 prime, whose difference is 2. Those should be the only two types of cases where the larger twin is a quadratic residue of the smaller.

Quadratic residues of 17 are *1, 4, 9, 16, 8, 2, 15, 13*. Another way of expressing this group is: *-9, -4, -2, -1, 1, 2, 4, 9*.

The latter expression illustrates how the quadratic residues are grouped symmetrically around zero. Of course, we could always substitute 19 for 2 to make it even clearer that 19 is a quadratic residue of 17. That is, when 6x6 is divided by 17, it leaves a remainder of 2, making 2, and thereby 19, a quadratic residue of 17.
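Squaring the nonzero classes mod 17 reproduces both lists; a quick sketch:

```python
# Compute the residue set of 17 directly by squaring, then check the
# symmetry around zero described above.
p = 17
residues = {pow(x, 2, p) for x in range(1, p)}
print(sorted(residues))  # -> [1, 2, 4, 8, 9, 13, 15, 16]

# 17 is a 4n+1 prime, so -1 is a residue and the set is symmetric:
# r is a residue exactly when p - r is.
assert all((p - r) in residues for r in residues)
# 19 is congruent to 2 (mod 17), so 19 is a quadratic residue of 17.
assert 19 % p in residues
```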

Now let's make a list of the quadratic residues of 19.

*1, 4, 9, 16, 6, 17, 11, 7, 5*. It also looks like this:

*-14, -13, -8, -3, -2, 1, 4, 9, 16*.

The groupings are asymmetrical, for 19 is an 8n+3 prime, and of course therefore a 4n+3 prime.

We see that 17 is a quadratic residue of 19, as well. What we also see is that in either of these cases (an 8n+3 and an 8n+1, or an 8n+1 and an 8n+7), the two primes will be quadratic residues of each other. We further can note that in the case of the Legendre symbols for these two numbers, they will always be positive, so one is always multiplying two positive Legendre symbols together.

Of course the above cannot be the case in general for twin primes, but only for the two cases we looked at.

What happens for other twin prime combinations? Well, what has to happen? First of all, we think we can guarantee that all other combinations of twins will generate two negative Legendre symbols to multiply together to achieve positive 1. All we have to do is try a few.

11 and 13 are 8n+3 and 8n+5. Quadratic residues of 11 are:

1, 4, 9, 5, 3. An identical expression is: -8, -6, 1, 4, 9.

Quadratic residues of 13 are:

1, 4, 9, 3, 12, 10. An identical expression is: -10, -3, -1, 1, 4, 9.

Notice that neither group contains its twin in its residue set. Both Legendres will be negative, producing a positive upon multiplication.

The remaining case is 8n+5 with 8n+7. The twins 29 and 31 will fit this bill.

The residues of 29 are:

1, 4, 9, 16, 25, 7, 20, 6, 23, 13, 5, 28, 24, 22.

The residues of 31 are:

1, 4, 9, 16, 25, 5, 18, 2, 19, 7, 28, 20, 14, 10, 8. 

Notice that once again, neither is in the other's quadratic residue set. The Legendres will be negative by themselves, producing a positive product.

The only case we did not explore was 8n+1 with 8n+7. Unfortunately the smallest pair of twins with this form are 71 and 73, for which I do not care to calculate the sets. But I can guarantee they are quadratic residues of one another.

It appears that if primes, and in particular twin primes, are equally distributed by type, then half the pairs will be residues of one another and half will not be. In any case, for twin primes the combined Legendre symbols will always create positive reciprocity, whether it attains it through (1)(1) or (-1)(-1).
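These residue-set checks can be automated with Euler's criterion, (a|p) ≡ a^((p-1)/2) (mod p), read as +1 or -1. A sketch (the helper name is mine) confirming both the worked cases and the always-positive product for twin pairs:

```python
# Euler's criterion as a Legendre-symbol oracle.
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# The two cases worked above: 17 and 19 are mutual residues (+1 times +1)...
assert legendre(19, 17) == 1 and legendre(17, 19) == 1
# ...while 11 and 13 are mutual non-residues (-1 times -1).
assert legendre(13, 11) == -1 and legendre(11, 13) == -1

# For every small twin pair, the product of the two symbols is +1.
for p, q in [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (71, 73)]:
    assert legendre(p, q) * legendre(q, p) == 1
print("Legendre product is +1 for every twin pair tested")
```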

I conjecture for the moment that the latter [(-1)(-1)] occurs when only three factors of 2 are involved between the minus ones of the two twins.

This conjecture feels hopeful. If it were true, it would enable us to immediately "see" the characters of the separate Legendres involved. Everything seems pinned on the 8n+5 number, since it plays a part in both cases. If its minus one contains more than two factors of 2, I am saying it will never cut Eisenstein's lower triangle into odd halves.

Perhaps this is obvious, but I have to find a way to prove it or demonstrate its truth or falsity clearly. The method would be to find any 8n+5 prime involved in a twinship, whose minus one has more than two factors of two yet still divides Eisenstein's triangle into odd halves. For instance, 39+41 instead of 40+40, for eighty lattice points et al. This seems like an interesting question to pursue. We need to make a list of 8n+5 primes to see if any are both super even (more than two factors of 2) and involved in a twinship. It does not matter if our 8n+5 is the larger or the smaller of the twins.

5, 13, 29, 37, 53, 61.....

(101, 103)

It turns out that most of these primes are involved in a twinship. This is no way to proceed.

Wait. I have seen it. When 4 is added to any 8n number, the result has exactly two factors of 2, which is the case with all 8n+5-1 numbers. Thank God for Gauss. This is exactly the condition we need for the conjecture to be true, and we see that the conjecture is indeed true.

A diagonal through WAXY in Eisenstein's rectangle in Wikipedia will always cut WAXY into two equal but odd numbers of lattice points whenever the diagram is for twin primes either of which is an 8n+5 number. The exact principles used here apply anytime one looks at any pair of primes of opposite type. This concludes the investigation.

* * * * *

It is now clear that 8n+5 numbers will always have two and only two factors of 2. This becomes clear when we tie these numbers to the ruler sequence, which expresses the degree of evenness of each consecutive even number, in other words, the number of factors of 2 it contains.

1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, 1, 2, 1, 3, 1, 2, 1, 4...
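The ruler sequence is just the 2-adic valuation of the even numbers taken in order; a quick sketch reproducing the terms above (the function name is mine):

```python
# Ruler sequence: term k is the number of factors of 2 in the k-th
# even number, 2k.
def v2(m):
    count = 0
    while m % 2 == 0:
        m //= 2
        count += 1
    return count

ruler = [v2(2 * k) for k in range(1, 25)]
print(ruler)
# -> [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, 1, 2, 1, 3, 1, 2, 1, 4]
```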

Now I understand what Gauss was doing when he proceeded on from 4n numbers to investigate 8n numbers, and why.

At this time I am prepared to guarantee how a diagonal will apportion the lattice points of WAXY when the diagram is for twin primes, according to the types of primes involved. I am not quite prepared to guarantee how the diagonal will apportion lattice points for any two primes at all, based simply on their ruler function positions. But Hallelujah anyway! I have learned much this time.

What I need to know now is this: When we have two super even (more than two factors of 2 apiece) 4n+1-1 numbers, I believe they will not always apportion the lattice points into two even sets just because they are super even. Only Eisenstein diagrams for rectangles with very low eccentricity (like twin primes) can guarantee what the cut will be. Eccentric rectangles may not always produce the result of two equal sets of lattice points. Which has me wondering if hugely eccentric rectangles can _ever_ produce equal sets. Somehow, I suppose they can. But I really have no clue whether they can ever produce two unequal sets with an odd number of elements.

It would sure be nice if the ruler function ruled the whole law. For all I know, it does. I certainly hope it does. What that would mean is this: We could determine the cut, and thereby the Legendre symbols for any two primes at a glance. Let us pray the ruler function rules, which I doubt.

----------


## desiresjab

A look back into my own papers reveals the answer quickly. Both (5, 13) and (5, 17) arrive at their positive 1 through a multiplication of (-1)(-1), yet their minus ones all are highly even and 16 is super even, suggesting that past a certain level eccentricity plays a greater part than evenness in deciding the cut. This is now clear. As p/q varies from 1/1, a square, the distance of p/q from 1 is its eccentricity. I believe this ratio of p/q is the key factor in bringing the rest of the law under the reign of understanding. I can imagine a graph, two graphs intersecting, one for evenness, one for eccentricity. They have to cross somewhere. That is where one takes over dominance from the other.

----------


## desiresjab

To put it in an even smaller nutshell: The minus ones of the four types of 8n+z primes all have their own evenness, which is perfectly predictable. Only the minus ones of 8n+1 type primes ever have more than two factors of 2. Just as the ruler function shows, all the action that changes is in the 8n slots, making 8n+1 primes the only primes whose minus one can be super loaded with factors of 2. How super evenness and eccentricity of the Eisenstein rectangle interact is now the question.
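These class-by-class evenness claims can be machine-checked; a sketch (helper names mine), reading "super even" as at least three factors of 2:

```python
# For any odd prime p: the number of factors of 2 in p-1 is 1 for the
# 8n+3 and 8n+7 classes, exactly 2 for 8n+5, and at least 3 for 8n+1.
def v2(m):
    count = 0
    while m % 2 == 0:
        m //= 2
        count += 1
    return count

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

for p in (n for n in range(3, 1000, 2) if is_prime(n)):
    e = v2(p - 1)
    if p % 8 in (3, 7):
        assert e == 1
    elif p % 8 == 5:
        assert e == 2
    else:                      # p % 8 == 1, the only super even class
        assert e >= 3
print("evenness matches the 8n+z class for every odd prime below 1000")
```

(The congruence alone decides this, prime or not, since p-1 mod 8 fixes the low bits; the prime filter just matches the claim as stated.)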

----------


## desiresjab

I will now make a conjecture which I myself may be able to quickly dash.

Any two 4n+1 primes far enough out the number line, one of which is an 8n+1 prime, will make the Eisenstein cut into two identical and even sets. Even separated by four instead of two, if they are far enough out the number line the eccentricity of their rectangle will approach zero, and should force the apportioning of the diagonal into two even and equal sets. It only takes one counter example to dash the conjecture to smithereens.

Must two highly even primes far out the number line really always be quadratic residues of one another merely because they are relatively close together? Hmmm. This sounds very suspicious. But we shall see.

----------


## desiresjab

Confirmed already! How? Because it was trivial after all. If one 4n+1 prime overlaps another by 4, four is a square, so they will obviously both have to be residues of the other, since one of them is at a glance known to be.
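The "overlap by 4" argument can be spot-checked with Euler's criterion: for 4n+1 primes p and q = p + 4, (q|p) = (4|p) = +1 since 4 is a square, and reciprocity for a 4n+1 prime then forces (p|q) = +1 as well. A sketch (helper names mine):

```python
# Every pair of 4n+1 primes differing by 4 should be mutual residues.
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

pairs = [(p, p + 4) for p in range(5, 500)
         if p % 4 == 1 and is_prime(p) and is_prime(p + 4)]
for p, q in pairs:
    assert legendre(q, p) == 1 and legendre(p, q) == 1
print(pairs[:4])   # -> [(13, 17), (37, 41), (97, 101), (109, 113)]
```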

----------


## desiresjab

By the way, a previous post did not post, and the previous conjecture was easily solved, since the two primes in question overlapped by 4, a square number.

Which leads us to the next question, actually an observation. Yeah, it is like I took a math pill tonight.

Suppose we have two super even 8n+1 primes. One is relatively close to zero, such as 17, for instance; the other is tremendously far out the number line. As long as the distance between them, their difference, the overlap number, is an obvious square number that we can see, we are guaranteed positive reciprocity positively gained, that is, by multiplying (1)(1) instead of (-1)(-1).

So in the case of this hugely eccentric rectangle we are guaranteed positive reciprocity positively gained. But do we know if the quadrant rectangle WAXY in Eisenstein's diagram on Wikipedia will be divided by the diagonal into two equal sets? Unfortunately, we do not. We only can guarantee that both sets will contain a positive number of lattice points, not that they will be equal. We cannot even say the difference between lattice points of the two sets cannot exceed two. We believe this is the case, but cannot prove or demonstrate it to our own satisfaction yet. We is me, apparently.

----------


## YesNo

I am glad to see you back doing number theory. I am off and on thinking of the Sierpinski sequences and whether I can form coverings of them. I have been thinking about using Python to generate a cover, but I keep getting distracted.

----------


## desiresjab

> I am glad to see you back doing number theory. I am off and on thinking of the Sierpinski sequences and whether I can form coverings of them. I have been thinking about using Python to generate a cover, but I keep getting distracted.


That gasket is often used to illustrate similarity across scale in books on fractal geometry. I have seen it, and about all I know is that it has fractal properties.

It is good to be back on QR, especially since I am making progress. My current attempt is to find something in the behavior of highly even numbers that distinguishes them. Only 8n type numbers are super even (more than two factors of 2).

I am beginning to suspect there might be no defining behavior that sets them apart other than what I have already stated about the role of "degree of evenness", otherwise I would already have found it in the literature. This is likely why 4n types are the only ones used in the formal definitions.

Something I consider quite important that I learned last night from graphing is that the difference between lattice point sets can exceed 2. For the primes (5, 41), one set has 16 points and the other 24. Now we know any number can be this difference, depending only on the eccentricity of the rectangle. For me this is a huge breakthrough.
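The 16-and-24 claim for (5, 41) is easy to confirm mechanically; a sketch, assuming the WAY/YAX split counts the lattice points of the (p-1)/2 by (q-1)/2 rectangle on each side of the diagonal y = (q/p)x (the function name is mine):

```python
# Direct count for (5, 41): points of the 2-by-20 rectangle below
# versus above the diagonal y = (41/5)x.
def lattice_split(p, q):
    half_p, half_q = (p - 1) // 2, (q - 1) // 2
    below = sum((q * x) // p for x in range(1, half_p + 1))
    return below, half_p * half_q - below

print(lattice_split(5, 41))   # -> (24, 16): a difference of 8
```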

The thing about 8n numbers being the only ones with super evenness I should have realized long ago. I have been in possession of the ruler function for about a year and only just now have put the thoughts together.

----------


## desiresjab

The question might be asked, "_Why explore this in a cosmology thread_?" Cosmology is what I think it is. I am trying to look into the mind of the creator, to quote an idea found in Peter Martinson's paper on QR. Every time one understands something about math they previously did not, it amounts to looking into the creator's mind and methods. The deeper the proposition, the deeper one must look into the creator to understand it.

----------


## desiresjab

By the way, if anyone looks at the Martinson paper be aware that there are two mistakes in it. Specifically, it once lists 2 as a quadratic residue of 19, and it once lists 67 as a 4n+1 number. These mistakes can stall an amateur, as they did me.

----------


## Danik 2016

I don't understand anything about number theory, but I'm glad that this scientific thread is alive again.

----------


## Dreamwoven

I am, too, Danik!

----------


## desiresjab

I have a dear friend who regularly likes to blast science and technology, and even math. Sometimes I fight back, but more often I let it go. It is useless to argue with anyone about what something is. But I find that most people who rail against science have misconceptions about what it is and what it is supposed to do. In short, one might call it the art of numerical observation. Poets and novelists are keen observers, too, but not normally numeric observers. Scientific observation is tied to numerics because of the world around us--the world and things in it quantify naturally, once humanity taught itself the knack. Objects fall at the same speed every time, so through repeated experiment men were able to quantify that speed and finally find a formula for it.

Some people, honestly, expect way too much of mathematics and science, but when asked for suggestions they come up with the same old criticisms. Do they expect all scientists to drop what they are doing and go look for a ghost or proof that aliens built the earth's ancient pyramids?

The legitimate mathematicians plug along, as they always have, noting patterns in innocent numbers. Brains like Fermat, Euler, Gauss, Eisenstein and Riemann, and many others, had built up quite a cache of these number patterns in three hundred years. Lo and behold, almost every one of them has a reflection in nature or a direct expression taken from a pine cone, a seashell or a flower stalk, or at least has a very strong application. It is a fact that much of the action in our everyday world of man and nature can be compressed into a simple formula, a number pattern. These patterns were there, someone had to find them, someone had to eventually realize their applicability to some corner of our universe. It is a cause not for blame but celebration.

Will numbers prove as useful in the study of so called spiritual phenomena, dark matter, time travel, astral travel, dream awareness, consciousness itself? Will it be able to handle what physicists dig up? I suspect it will be useful for the things it is now useful for and some of what science uncovers. We may discover another tool. Math is very fatalistic. Math is a grand tautology.

I feel the biggest laws of the universe are yet undiscovered, even barely suspected. I think this has to be the case when our observations are only impeccable concerning 4% of the stuff in the universe, yet almost totally ignorant of the other 96%. The fruit does not hang so low anymore.

How gigantic was it when Newton discovered the laws of 4% of our stuff, and then later when Einstein replaced the model? An actual scientific breakthrough in any of the fields mentioned above would be huge. I expect something odd when someone finally lays a finger on these mysteries. We may find what is holding together those galactic clusters which are moving too fast is a form of consciousness. Sometime in the future the discovery of a consciousness particle would not shock me. I expect the strange out of the universe.

----------


## YesNo

> I feel the biggest laws of the universe are yet undiscovered, even barely suspected. I think this has to be the case when our observations are only impeccable concerning 4% of the stuff in the universe, yet almost totally ignorant of the other 96%. The fruit does not hang so low anymore.


The problem with that 96% of the supposed missing stuff is that it may not be there. All that we may need to do is reformulate the mathematical theory of gravitation and do away with the need to find dark matter. And since we haven't found any so far, maybe it doesn't exist at all.

Einstein did something like this in the early 20th century. At that time astronomers were looking for a planet they called Vulcan near Mercury, which should exist if Newton's laws were correct and would explain the anomalous orbit of Mercury. Einstein's modification of Newton's gravitational theory made the search for Vulcan unnecessary.

I'm getting that account of Vulcan from John Moffat's "Reinventing Gravity". He has a new theory of gravity that should make dark matter and black holes unnecessary. Of course, if someone finds dark matter that would shoot down his new theory.

----------


## desiresjab

> The problem with that 96% of the supposed missing stuff is that it may not be there. All that we may need to do is reformulate the mathematical theory of gravitation and do away with the need to find dark matter. And since we haven't found any so far, maybe it doesn't exist at all. 
> 
> Einstein did something like this in the early 20th century. At that time astronomers were looking for a planet they called Vulcan near Mercury that should exist if Newton's laws were correct which would explain the orbit of Mercury. Einstein's modification of Newton's gravitational theory made the search for Vulcan unnecessary. 
> 
> I'm getting that account of Vulcan from John Moffat's "Reinventing Gravity". He has a new theory of gravity that should make dark matter and black holes unnecessary. Of course, if someone finds dark matter that would shoot down his new theory.


Reformulate? Hmmm. Not so sure about that. Whatever dark matter and dark energy turn out to be, they represent new phenomena. I believe theories are reformulated when they are pretty close but off. Any theory of gravity does not even get us close to understanding the phenomena we are observing. But, yes, it could even turn out you are right. I am doubtful we will do away with theories of gravity altogether, and since any theory would need serious modification to fit today's observations, what we will have around is a modified one. Of course it could also turn out that our theories of gravity are essentially correct, and that DM and DE are a new type of phenomena requiring a new structure piled on top of our theories of gravity.

----------


## desiresjab

Maybe a reformulated theory of gravity would occur if we discovered new features of gravity that could account for the phenomena. There may be types of neighborhoods where gravity behaves differently. There are not supposed to be, the way the theory is formulated, but the universe is full of surprises and I believe it will continue to be. Maybe there is more than one type of gravity--a Higgs Boson stock split of sorts.

----------


## YesNo

> Reformulate? Hmmm. Not so sure about that. Whatever dark matter and dark energy turn out to be, they represent new phenomena. I believe theories are reformulated when they are pretty close but off. Any theory of gravity does not even get us close to understanding the phenomena we are observing. But, yes, it could even turn out you are right. I am doubtful we will do away with theories of gravity altogether, and since any theory would need serious modification to fit today's observations, what we will have around is a modified one. Of course it could also turn out that our theories of gravity are essentially correct, and that DM and DE are a new type of phenomena requiring a new structure piled on top of our theories of gravity.


I think we need to keep some distance from media reports about what is or is not real in the universe when it comes to dark matter, dark energy, black holes or a singularity at the big bang.

Regarding the current need for dark stuff, the following seems to be true: the current evidence from viewing the rotation of galaxies has falsified Einstein's theory of gravity in a way so big that measurement inaccuracies do not account for the discrepancy in the prediction and the observations.

There are two ways around the problem and, from what I understand from reading Moffat's book, many people are pursuing both approaches:

1) Einstein's theory is correct. That means there exists dark stuff, but we cannot detect it. As people look they eliminate possible candidates for this dark stuff and these negative results are valuable.

2) Einstein's theory is not correct. We need a new theory of gravity. However, that theory of gravity is not easy to come by. Moffat mentioned some of the notable failures. He does think his version is sound and fits the observations.

----------


## desiresjab

You cannot be a physicist without a thorough knowledge of calculus. What you have to know if you are a number theorist is modular arithmetic. Many people do not know what that is. If they look it up, they are told it is "clock arithmetic," and so it is, as far as that goes. If I say to you, "It is twenty-five o'clock," you will easily figure out it is one o'clock.

Mathematicians call it congruence theory. That is what Gauss named it. The notation looks like this: 16 ≡ 4 (mod 12). Translated into English, that means 4 is the remainder when 16 is divided by 12. Just as in normal arithmetic, this is equivalent to 16 − 4 ≡ 0 (mod 12). However, we could not say 16/4 ≡ 1 (mod 12), as in normal arithmetic, since the remainder when 4 is divided by 12 is 4. Twelve is called the modulus, because Gauss knew Latin.

Usually, the modulus is a prime number, but it does not have to be. There are a few more traps to watch out for and exceptions to know when dealing with composite moduli, where the theory is a little more extended. We will keep it prime.
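As a quick check on the notation, congruences mod m are just statements about remainders, which the built-in `%` operator makes concrete. A minimal Python sketch (nothing here beyond what the paragraph above states):

```python
# A congruence a ≡ b (mod m) says a and b leave the same remainder
# when divided by m, i.e. m divides a - b.
assert 16 % 12 == 4           # 16 ≡ 4 (mod 12)
assert (16 - 4) % 12 == 0     # equivalently, 16 - 4 ≡ 0 (mod 12)
assert 25 % 12 == 1           # "twenty-five o'clock" is one o'clock
assert (16 // 4) % 12 == 4    # but 16/4 = 4 leaves remainder 4, not 1
print("congruences verified")
```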

To show its usefulness, let us consider an easy problem.

1. Use mod notation to find the last digit of 3^40. (Hint: in other words, find the remainder when 3^40 is divided by 10.)

To solve this with mod notation we first have to know a simple law: 

If a^r ≡ b (mod m), then a^(rs) ≡ b^s (mod m). We merely need to factorize the exponent and use this law.

3^4 ≡ 1 (mod 10). Then 3^(4·10) ≡ 1^10 (mod 10).

The answer is one.
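For anyone who wants to check this sort of result mechanically, Python's three-argument `pow` performs exactly this kind of modular exponentiation (a sketch, not part of the derivation above):

```python
# pow(base, exp, m) reduces modulo m at every step, so it never needs
# to build the huge number 3**40 in full.
assert pow(3, 40, 10) == 1     # last digit of 3^40 is 1, as derived above
assert (3**40) % 10 == 1       # brute force on the full number agrees
```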

****

Let's look at one slightly harder.

2. What is the last digit of 7^20?

7^4 ≡ 1 (mod 10). Therefore 7^(4·5) ≡ 1^5 (mod 10).

The answer again is one.

* * * * *

Maybe someone can solve this next one.

3. Find the last digit of 7^9.

----------


## desiresjab

The calculational difficulties grow fast with only a little increase in the base. This number is probably too large to find the last digit on your calculator. 

What is the last digit of 19^20?

Factor the way easiest for calculation.

19^2 ≡ 1 (mod 10), so 19^(2·10) ≡ 1^10 (mod 10).

The answer is one.

----------


## YesNo

> Maybe someone can solve this next one.
> 
> 3. Find the last digit of 7^9.


7^9 (mod 10) = 7^(3·3) (mod 10).

Using Google Sheets, 7^3 = 7*7*7 = 343 and 343 (mod 10) = 3.

Using your example,

7^(3·3) (mod 10) = 3^3 (mod 10) = 27 (mod 10) = 7

----------


## desiresjab

> 7^9 (mod 10) = 7^(3·3) (mod 10).
> 
> Using Google Sheets, 7^3 = 7*7*7 = 343 and 343 (mod 10) = 3.
> 
> Using your example,
> 
> 7^(3·3) (mod 10) = 3^3 (mod 10) = 27 (mod 10) = 7


Good deal, old boy, you are right there with me. That one may have been trickier than the first several examples. This type of manipulation is fundamental to doing number theory. Many other fundamental ideas and techniques are indispensable. Every one that is learned and studied adds a speck to one's basic understanding of our number system and numbers in general.

Many people either forgot or never quite realized that something as routine as 867 is nothing more than an algebraic expression with the unknown filled in as 10.

867 = 8(10)^2 + 6(10)^1 + 7(10)^0
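That expansion can be checked mechanically. A small Python sketch (the function name `from_digits` is my own) that rebuilds any number from its digits:

```python
def from_digits(digits, base=10):
    # Evaluate the digit list as a polynomial in the base:
    # [8, 6, 7] -> 8*10^2 + 6*10^1 + 7*10^0.
    return sum(d * base**p for p, d in enumerate(reversed(digits)))

# 867 = 8*10^2 + 6*10^1 + 7*10^0
assert from_digits([8, 6, 7]) == 867
```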

----------


## desiresjab

Banned again for what I am not doing. I don't get it. And I have forgotten how to extract myself from this mess.

----------


## desiresjab

Hmmmm....Which math notation did I use that it did not like? Could it be as simple as parentheses where it did not understand them?

----------


## desiresjab

Will it take a letter to a numerical exponent?

a^6

----------


## desiresjab

Will it take a^a and (a)^a and a^(a)?

----------


## desiresjab

Okay, I am stumped.

----------


## desiresjab

I will submit this post piece by piece until I find what is preventing it from passing. It will be done when I put QED at the bottom.

I may as well keep going with simple but intriguing ideas found within everyday numbers. The next technique is one I would not expect anyone without experience in number theory to find, so I will go ahead and show the technique in a solution first, then explain its general workings. I know I can count on Yes/No; I hope others will follow the reasoning as well, since it seems to be the way God thinks about some things, to mix metaphor and mathematics.

Prove that for every prime number p other than 2 and 5, there is a multiple of p whose every digit is 9. To me, at least, this law is not intuitively apparent without a studied understanding of numbers first. I suppose a Newton or a Gauss might look at it for the first time and see it immediately, but not most of us. Perhaps I am elevating my own dimness by suggesting it would take a Gauss or Newton. I still think it would require a massively bright person to work this out on their own without former experience in related problems.

A concrete example first, using 7 as our prime. We notice that N = 1/7 = .142857... This is part of the technique, in case we are not Newton or Gauss and did not think of this approach. The three dots signify that the digits after the decimal point comprise the repetend, and keep repeating.

When we multiply by 10^6, we get the digits of the repetend out in front of the decimal point, but the repetend behind the decimal is unchanged, like this:

N = .142857...

(.142857...)(10^6) = 142857.142857... = 1,000,000N.

We know this is a million copies of the repetend. If we subtract one of them, like so:

(142857.142857...) − .142857... = 142857 = 999,999N.

In other words, 999999/142857=7 and of course, 7·142857=999999, showing that seven has such a multiple.
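The p = 7 case can be verified numerically in a couple of lines (a sketch):

```python
# 1/7 has a repetend of length 6: 142857. Multiplying it by 7 gives
# a string of six 9's, so 7 does have an all-nines multiple.
repetend = 10**6 // 7          # the leading six digits of 1/7
assert repetend == 142857
assert 7 * repetend == 999999
assert 999999 % 7 == 0
```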

Okay, everything got through except the general description at the last. I will try to fix that and post it next.

QED

----------


## desiresjab

In general: with R the length of the repetend of the prime p and N the repetend itself, forming N·10^R and subtracting N by the above technique must yield a similar result, a string of 9's of length R.

R will always be a divisor of p-1, or it will be p-1 itself.

There was a proof of the general case to go along with this, but since I have screwed up my own home document trying to fix this problem, I will now have to fix that.

That's all. Hopefully, it will be understood without the proof of the general case.

----------


## desiresjab

Here is the rest of it, not so much a proof as a clarification.

N·10^R − N = 9(10)^(R−1) + 9(10)^(R−2) + ... + 9(10)^0. This is indeed a string of R nines. Now,


if Z = 999999N, then Z/999999 = N = 1/p. Therefore p = 999999/Z, and p·Z = 999999.
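In code, R can be computed as the multiplicative order of 10 mod p. The following Python sketch (the function name is mine) checks the general claim for a handful of primes other than 2 and 5:

```python
def repetend_length(p):
    # Length of the repetend of 1/p = multiplicative order of 10 mod p.
    r, power = 1, 10 % p
    while power != 1:
        power = power * 10 % p
        r += 1
    return r

for p in [3, 7, 11, 13, 17, 19, 23]:
    R = repetend_length(p)
    nines = 10**R - 1            # a string of R nines
    assert nines % p == 0        # so p has an all-nines multiple
    assert (p - 1) % R == 0      # and R divides p - 1, as noted above
```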

----------


## YesNo

> In other words, 999999/142857=7 and of course, 7·142857=999999, showing that seven has such a multiple.


Nice result.

I've seen the technique used as a way of showing that a repeating decimal can be represented as a ratio of integers. This shows that such numbers are "rational". However, I did not realize that this implied that any prime has a multiple where the product is all nines. Now that you point it out, it makes sense that this should be the case.

----------


## desiresjab

I will give the forum three days to work on a problem that took me longer than that to solve. If more time is requested, I will give it gladly.

* * * * *

If I summed all the digits of the decimal representation of 4444^4444, and called that number C, then summed all the digits of that number, what would the resulting number D be, exactly?

It looks like this: A^A = B, the sum of the digits of B = C, and the sum of the digits of C = D. Find D.

Can anyone do this problem? Give it a try.

----------


## Dreamwoven

> I will give the forum three days to work on a problem that took me longer than that to solve. If more time is requested, I will give it gladly.
> 
> * * * * *
> 
> If I summed all the digits of the decimal representation of 4444^4444, and called that number C, then summed all the digits of that number, what would the resulting number D be, exactly?
> 
> It looks like this: A^A = B, the sum of the digits of B = C, and the sum of the digits of C = D. Find D.
> 
> Can anyone do this problem? Give it a try.


I wouldn't be able to get beyond the neat way you present the problem, desiresjab.

----------


## desiresjab

> I wouldn't be able to get beyond the neat way you present the problem, desiresjab.


Well, this problem is no slouch. I believe I found it in an old math Olympiad exam. It was a bonus question for the brightest teenagers on earth. That is kinda disgusting, isn't it? Although I have had possession of the problem for years, I did not look at it much until I began to collect some tools. Then it took a long time of laying it down for long spells and forgetting all about it. A problem I want to solve but have no way to approach can drive me crazy if it stays in my mind. I just loved the shape of this problem and what it was asking for looked so impossible.

What solving stuff like this comes down to is having the tools. There are always properties, laws and techniques which are related somehow and can be used to excavate the answer. If you have no idea about the existence of some property which will lead you right to the answer you are in for a long haul, probably an impossible one. Only cats like Leibniz could solve such a monster unprepared, independently rediscovering such laws and properties as are needed along the way.

I will show how this one is done soon. Right now I have personal obligations begging to be resolved around the house.

----------


## desiresjab

Sum the digits of the decimal representation of 4444^4444, then sum the digits of the new number. The form of the problem looks like this:

A^A = B, the sum of B's digits equals C, the sum of C's digits equals D. Find D, exactly.

To find the number of digits in the decimal representation of 4444^4444, in other words how long the number is, the algebraic technique is to take the log of A^A, add 1, and chop whatever remains behind the decimal, leaving us with a whole number, like so:

⌊1 + log A^A⌋ = ⌊1 + A log A⌋ = 16211, in the case of 4444^4444.

The actual calculations are as follows:

The actual calculation is 4444(log 4444) = 16210.707879.... After we chop the decimal part, which is irrational and goes on forever without pattern, and add 1, we know the huge number A^A has 16211 digits. This number is far larger than the number of atomic particles in the universe, which is only in the neighborhood of 10^120 particles, tops.
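This digit count is easy to confirm in Python, which handles integers of arbitrary size, so the huge number can simply be built and its digits counted (a sketch):

```python
from math import floor, log10

A = 4444
predicted = floor(1 + A * log10(A))   # the logarithm formula above
actual = len(str(A**A))               # count the digits directly
assert predicted == actual == 16211
```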

How can we sum the digits of B when we do not even know the number? We detour creatively, by supposing every one of those 16211 digits to be a 9, so that when we sum them they will equal 9(16211) = 145899.

145899, then, is the upper limit for C; it can be no larger. But we are allowed to pretend again. We pretend that all six digits of C are equal to 9.

6 × 9 = 54. Ah, do you know what that is? It is an upper limit for D, the number we are after.

Suddenly, we are somewhere. One can almost smell solutions, but how do we get there? The answer lies again in observation and technique, in that order.

When any huge numbers are multiplied, it is always easy for us to ascertain what the last digit is. In the case of 4444 times itself, no matter how many times successively we perform the operation, the last digit is always 4 or 6. Even powers of 4444 end in 6, and odd powers end in 4. Of course 4444 as an exponent is an even power, so 4444^4444 ends in a 6, and when divided by 10 would leave a remainder of 6.

We must detour now for some pretty facts, lest we arrive at our final destination with our route still shrouded in mystery.


* * * * *

Any time you sum the digits of a number X, that sum S will remain congruent to X (mod 9) through successive summing operations. This only happens with 9 because we use base 10. In base 8, 7 would have this same property through successive summing operations. Any base. (The above preservation property is also true for 3 in base 10).

Notice that once we ascertained the number of digits of 4444^4444, summing the digits successively is the only operation we have performed.

Successive powers under a modulus (any particular divisor) always bend back and repeat themselves in a cycle. This is called a power residue cycle, a cyclotomic number, to throw in a fancy term dangerously. They are certainly cyclical, but I do not know if that makes them cyclotomic. The host confesses.

A modulus does not allow any number in its system to be as large as itself. The modulus is king. Larger numbers are bent back by division until only a remainder is left. *Any* whole number, when divided by 10, for instance, will leave one of ten remainders: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. That's it. The modulus can systematically reduce any giant number to one smaller than itself. And as far as the king is concerned, any number larger than itself which it can divide is equal to zero. 30 ≡ 0 (mod 10).

Let's look at the _power residue set_ of 4444 (mod 9). It is okay to begin with the zero power of 4444, which we know is always 1 for any number, and 1 will always be the first number in the power residue set of any number.

4444^0 ≡ 1 (mod 9)

4444^1 = 4444 ≡ 16 ≡ 7 (mod 9)

4444^2 = 19749136 ≡ 40 ≡ 4 (mod 9)

4444^3 = 87765160384 ≡ 55 ≡ 1 (mod 9)
...
...
...



4444^4444 ≡ 1 (mod 3)

9(493) = 4437, which means 4444 ≡ 7 (mod 9)

4444^4444 ≡ 7 (mod 9)

At its 4444th power, 4444 is congruent to 7 (mod 9) and 1 (mod 3).

In fact, the power residue set for this number (mod 3) is

{1, 1, 1, 1, 1...}, and at any power our huge exponential number only equals 1 (mod 3).

The pattern listed above vertically for (mod 9) power residues has produced 1 again at power 3, so we know the complete cycle, which goes 1, 7, 4, repeating every three powers until the highest power is reached. Also notice that to take the congruency of a number with respect to 9 in our base 10 system, we only need to sum its digits then take the remainder when divided by 9.

The more convenient way for calculation is to begin the pattern on the first power so that the power residue set cycles like this from power 1:

{7, 4, 1, 7, 4, 1...} 
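The cycle is easy to reproduce with modular exponentiation (a Python sketch, checking the reasoning above):

```python
# Power residues of 4444 mod 9 cycle with period 3: 7, 4, 1, 7, 4, 1, ...
assert [pow(4444, k, 9) for k in range(1, 7)] == [7, 4, 1, 7, 4, 1]

# The exponent 4444 is 1 past a multiple of 3, so the 4444th power
# lands on the first element of the cycle:
assert 4444 % 3 == 1
assert pow(4444, 4444, 9) == 7
assert pow(4444, 4444, 3) == 1
```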

* * * * *

Gathering what we know so far:

Our target number is less than or equal to 54 and is congruent to 7 (mod 9).

That becomes a small set. 

{7, 16, 25, 34, 43, 52}

We have gone from fifty-four possibilities to six.

Now, as was the case with 9, the value, the magnitude, of a number is the same as the sum of its digits (mod 3), so we are further able to say that the correct answer must be congruent to 1 (mod 3). Only three numbers now qualify.

{7, 16, 34}

Now the judging becomes more demanding, more appropriately the search for a tool or a property to distinguish Miss America out of the three.

We know that B ends in a 6. In symbols, B ≡ 6 (mod 10). But how does this help us? 10 cannot play the same trick that 3 and 9 did, for the preservation of its congruency does not happen. (It would happen for 10 only in base 11, where the factor 5 would also exhibit the property.) Ah, but maybe 6 can play the trick. Actually, it cannot help us that way, either.

The simple observation that works is that any number which leaves a remainder of 1 when divided by 3 and a remainder of 7 when divided by 9 has to leave a remainder of 4 when divided by 6. But the English of that is so long and messy. Check this out.

If D ≡ 1 (mod 3) and D ≡ 7 (mod 9), then D ≡ 4 (mod 6).

Only one number from the last set qualifies:

*{16} is the answer. D=16*

One can only assume several of the interior digital places of C are 0 and 1.

Since 16 is smaller than expected, let me recheck the steps. The mistake, if any, is probably in whether I followed that (7, 4, 1) cycle correctly to the last power.

I may be back.

I think I can proclaim it correct, since I validated my stepping on the power residue set of 7, 4, 1. At the 4443rd power the cycle is on 1, which puts it on 7 for the 4444th power.

Since we know C is less than 145899 but still a six digit number, I envision a number built along the following lines:

100456. It makes perfect sense now.
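Since Python works with integers of any size, the whole chain can also be checked by brute force. This sketch computes B, C, and D outright and confirms the congruence constraints derived above (I print C and D rather than assert particular values for them):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

B = 4444**4444        # the full 16211-digit number
C = digit_sum(B)      # sum of B's digits
D = digit_sum(C)      # sum of C's digits
print("C =", C, "D =", D)

# Invariants from the argument above:
assert len(str(B)) == 16211
assert B % 9 == C % 9 == D % 9 == 7   # digit sums preserve the residue mod 9
assert D <= 54                        # C has at most six digits
assert D in {7, 16, 25, 34, 43, 52}
```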

----------


## desiresjab

I had to scratch the next post out and replace it with this. Everything is all right. I know the answer is correct.

----------


## YesNo

To see if I understand, you have done the following:

1) Calculated the number of digits in 4444^4444 using a logarithm. You found there to be 16211 digits.

2) Found an upper bound on the sum of those digits by assuming they are all 9's, the highest value each digit could be. The max sum is 145899 which would be the maximum value of C. We know there are 6 digits in that sum.

3) We want the sum of the digits of C and so can find an upper bound on this as done in (2) by setting all the digits to 9. That max sum is 54.

4) You note that 4444^4444 (mod 9) = 7 and the sum of the digits of that number is also 7 (mod 9). This reduces the possibilities for D to a number in this set: {7, 16, 25, 34, 43, 52}

5) You note that 4444^4444 (mod 3) = 1. This reduces the possible value of D to one of {7, 16, 34}.

6) I am still trying to think through the last part, but I will leave this till later.

----------


## desiresjab

> To see if I understand, you have done the following:
> 
> 1) Calculated the number of digits in 4444^4444 using a logarithm. You found there to be 16211 digits.
> 
> 2) Found an upper bound on the sum of those digits by assuming they are all 9's, the highest value each digit could be. The max sum is 145899 which would be the maximum value of C. We know there are 6 digits in that sum.
> 
> 3) We want the sum of the digits of C and so can find an upper bound on this as done in (2) by setting all the digits to 9. That max sum is 54.
> 
> 4) You note that 4444^4444 (mod 9) = 7 and the sum of the digits of that number is also 7 (mod 9). This reduces the possibilities for D to a number in this set: {7, 16, 25, 34, 43, 52}
> ...



Your confusion at step six may be because indeed I have overlooked something. 16 and 34 both meet all three qualifications, i.e. ≡ 7 (mod 9), ≡ 1 (mod 3), and ≡ 4 (mod 6).

So really, our set of prospects still has two members left:

{16, 34}.

Now I have to think of something else.

----------


## desiresjab

Ha! ha! The laugh is on me. But I think I have the fix through a different method on step six of Yes/No's listing of my steps.

The idea is to go to base 11 and cast out 10's. We do not actually need to go to base 11, for that would involve adding precisely one new symbol (usually designated simply as _a_), so let's just say we did without actually doing it. We know 4444^4444 has the same value whether it is expressed in base 10 or base 11, and so would yield the same residue when 10's were cast out, whether or not that identical value used identical digits in the same order in the two systems. This way we will know how the digit sum of our number behaves (mod 10), and we are able to preserve our value through successive operations of summing the digits. These operations can be imaginary too, since the residue is preserved through them. When we cast out 10's in base 11, just as we cast out 9's in base 10, we are left with 6, which has identical real value in base 10 or base 11.

By this reasoning, this time I believe infallible, *16* is still the correct answer.
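The fact being leaned on here is general: in base b, a number's digit sum is congruent to the number itself mod b − 1, because b ≡ 1 (mod b − 1). For b = 11 that modulus is 10. A Python sketch (the function name is my own) of just that property:

```python
def digit_sum_base(n, b):
    # Sum the digits of n written in base b.
    s = 0
    while n:
        s += n % b
        n //= b
    return s

# Base-11 digit sums preserve a number's residue mod 10, just as
# base-10 digit sums preserve the residue mod 9.
for n in [105, 4444, 4444**44]:
    assert digit_sum_base(n, 11) % 10 == n % 10
```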

----------


## desiresjab

A Chinese parcel delivery worker just made a significant discovery concerning Carmichael numbers. Those are an important class of composite numbers that are hard to factor and often fool Fermat primality tests. One of the beauties of number theory is that discoveries like this one are often made without the use of advanced tools.

----------


## YesNo

The casting out 10's seems promising. I haven't tried it but I expect one should be able to distinguish which of those three numbers is the correct answer.

Do you have a link to the information about the discovery concerning Carmichael numbers? I was in Madison, Wisconsin, yesterday and stopped by some of the many used book stores in that university town and found Carmichael's "The Theory of Numbers" and "Diophantine Analysis". I was planning on going through that book making a Jupyter notebook out of some of the problems I found interesting.

----------


## desiresjab

> The casting out 10's seems promising. I haven't tried it but I expect one should be able to distinguish which of those three numbers is the correct answer.
> 
> Do you have a link to the information about the discovery concerning Carmichael numbers? I was in Madison, Wisconsin, yesterday and stopped by some of the many used book stores in that university town and found Carmichael's "The Theory of Numbers" and "Diophantine Analysis". I was planning on going through that book making a Jupyter notebook out of some of the problems I found interesting.


The links I found were distinctly uninteresting. They said less than I have. A Chinese friend told me about it. "Chinese amateur Carmichael numbers" worked as a search phrase.

I believe the casting out of 10's does work. I am very happy with it because I have never seen that used before. It will be embarrassing if my logic is wrong.

The book I am after is called 52! I had a book going on a similar subject when I heard about this one being published. I want to see if he did a better job than I think I could have done. I still may finish mine, if I do not think too much of his effort. I could order it online, but I do not make any financial transactions over the internet. If you find this one score it for me and I will pay cost plus shipping it to me.

----------


## YesNo

I searched for "52!" but I only came up with links to "52 factorial". Do you know the author or something more about the book? 

I'm not familiar with casting out 10's. If I get some time I will see if the idea would work. However, you might want to put in the details of the proof.

It doesn't hurt to publish your own book even if you liked the other one. However, it is good to have a list of references and that book may be one of them.

----------


## desiresjab

> I searched for "52!" but I only came up with links to "52 factorial". Do you know the author or something more about the book? 
> 
> I'm not familiar with casting out 10's. If I get some time I will see if the idea would work. However, you might want to put in the details of the proof.
> 
> It doesn't hurt to publish your own book even if you liked the other one. However, it is good to have a list of references and that book may be one of them.


Michael Wayne Cottle wrote 52!

I was not familiar with casting out 10's, either. It was just an idea that came to me.

----------


## YesNo

> Michael Wayne Cottle wrote 52!
> 
> I was not familiar with casting out 10's, either. It was just an idea that came to me.


You can find the book on Amazon. The kindle edition is 99 cents. 

He sounds like an interesting character.

Tell me more about casting out 10's. I don't really follow it.

----------


## desiresjab

Is there a certain part of the demonstration I might explain more clearly, Yes/No?

The biggest leap of faith comes in going to base 11 so we can see what happens to the sums (mod 10), without actually ever going there. I reason that a number expressed in one base versus another does not have more factors of 10. The digit 6 is congruent to this set 

(mod 10): {6, 16, 26, 36...}, and to this one (mod 11): {6, 17, 28, 39...}

It seems to me that for a number A to leave a remainder smaller than itself equal to 6 (mod 11), that remainder can only be 6, though the original number in base 10 might have ended in a digit other than 6. In fact, the number may end with the digit 5 in base 10, like 105 does. The reasoning for theoretically going to base 11 is to preserve the nature of (mod 10) sums over the operation of summing the digits of the powered number several times, as happens for 9 in base 10. I believe we can only guarantee this with a modulus which is one less than the base.

So when I sum all these digits in base 11, (mod 10), I should get the real last digit. I already know what B is (mod 10), but without going to base 11 I could not say anything about the last digit of D, because the result of the summing operation would not be preserved. The sum of a number's digits ought to stay congruent to the same value (mod 10).

I have never consulted anyone because I don't like to do that, since I am trying to solve problems myself. I know a math PhD I have played music with. At this point I would not mind giving him a call, to see if my reasoning has been correct. He will quickly see other ways to do it, so I will have to keep him on track. Even if I got the right answer, I want to be sure my reasoning was not faulty somewhere.

Because the problems I choose are hard for me, that makes them fun even when they are torturous. I always learn a lot and correct myself a lot.

----------


## YesNo

So there are three candidates to consider, {7, 16, 34}, in base 10. We got that set by considering 4444^4444 ≡ 7 (mod 9). Wouldn't we have to rewrite 4444^4444 in base 11 and find out what the digit was (probably not 7) modulo ten in base 11?

My suspicion is that converting to a new base might not help with the solution, but I don't know.

----------


## desiresjab

It is easy to verify that B ends in 6, base 10, simply by multiplying single 4's together and watching the last digit. It only alternates between 4 on odd powers and 6 on even powers. If one doubted that technique, one could go here http://www.javascripter.net/math/cal...calculator.htm and calculate the number directly and observe all 16211 digits--a count I was relieved to see matched my own.

Then the other day I went over to another specialized calculator that was supposed to calculate sums of digits. I pasted in the digits of our B = A^A and it gave me back a measly sum of seventy-six thousand and something, as it seemed, for C. This was only a five digit number, and so I felt it had to be wrong. I wrote the publishers of the calculator and told them so. Now I begin to waffle on my own judgement there.

145899 represented a _maximum_ value of the sum of B's digits. Maybe C was six digits long in the case of that maximum, but only five digits in reality. Then I would have gotten my maximum for D from these _overestimates_, which is perfectly fine, since all I was looking for at that point was a maximum.

* * * * *

I was so convinced the sums-of-digits calculator was wrong that I did not even bother to add up the number it gave as C. Had I done this, I would have noticed that the sum of the digits of 72,601 is 16, like I did just minutes ago. It was seventy-two thousand and something, not seventy-six thousand and something. I did not notice the details. All I thought I saw was the wrong answer for C sitting there.

This means I must have arrived at the right answer despite some faulty conclusions. I was protected by the part of my reasoning which was correct--that D was a maximum of 54 and congruent to 7 (mod 9) and 1 (mod 3). It did not matter at all that I thought C was a six digit number instead of a five digit one. It did not interfere with getting the answer.

It seems the answer is settled, then, by brute force and somewhat inadvertently. It also seems highly unlikely that I would have come up with this particular answer through erroneous reasoning. All my work pointed to this answer.

My base 11 speculations are open for discussion. They twist the brain like a sponge. 

* * * * *

I don't know if I mentioned this, but since 4447 is a prime, I thought there might be some means of working backwards through Fermat's little theorem to 4444. It was a fruitless avenue, but I may have only taken a wrong turn.

* * * * *

The key piece of reasoning was realizing that the congruence class of the sum of digits is preserved (mod 9) across repeated digit summing. The other was understanding power residue sets and their behavior.

I am satisfied with the discussion down to the set {7, 16, 34}. I had already eliminated 7, but I cannot remember how right at this moment, so it can stay in the discussion.
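That mod-9 invariance is easy to check directly in Python (a quick sketch; `digit_sum` is just a helper name, and the `set_int_max_str_digits` call lifts Python 3.11+'s default cap on converting huge integers to strings):

```python
import sys
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(20000)  # 4444**4444 has 16211 digits

def digit_sum(n):
    return sum(int(c) for c in str(n))

# 10 is congruent to 1 (mod 9), so a number, its digit sum, and the
# digit sum of the digit sum all sit in the same residue class mod 9.
n = 4444**4444
assert n % 9 == 7
assert digit_sum(n) % 9 == 7
assert digit_sum(digit_sum(n)) % 9 == 7
```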

----------


## desiresjab

My speculation about base 11 was correct!

The letter A in the following has nothing to do with the letter A in AA, but is the letter of the alphabet used to represent 10 in base 11.

The base 11 digits of a number X will always add up to something with the same congruence (mod 10) as X itself.

Base 10    Base 11
  96          88
 196         169
 296         24A


Whatever those digits base 11 add up to, we know they will be congruent (mod 10) to whatever the base 10 representation is congruent to. And we further know that the congruence class is preserved across the operation of summing the digits of the base 11 representation. This is how we know with certainty that the last digit of D is 6 in decimal representation, just as it is the last digit of B. I believe that is all of it.
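The claim can be confirmed with a short loop (a sketch; `digit_sum` here extracts the digits arithmetically, so no base conversion by hand is needed):

```python
def digit_sum(n, base):
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

# Since 11 is congruent to 1 (mod 10), the base-11 digit sum of any n
# is congruent to n itself (mod 10) -- the base-11 analogue of
# casting out nines.
for n in (96, 196, 296, 4444**4444):
    assert digit_sum(n, 11) % 10 == n % 10
```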

----------


## desiresjab

The former topic feels so wrapped up that I feel like another problem to untangle how God thinks is due. I believe the following was from a prep test for math Olympiad.

Prove that (2m)! (3n)! / [(m!)^2 (n!)^3] is always an integer.



(2m)! (3n)! / (m! m! n! n! n!) -- the bottom factors easily, not the top.



m! · (m+1)(m+2)···(2m) · n! · (n+1)(n+2)···(3n) / (m! m! n! n! n!), which after cancellation becomes:

(m+1)(m+2)···(2m) · (n+1)(n+2)···(3n) / (m! n! n!)



Written out, the numerator holds 2m, 2m-1, ..., down through m+1, and 3n, 3n-1, ..., down through n+1, while the denominator holds m, m-1, ..., 1 and two copies of n, n-1, ..., 1. I had been using dots and dashes to make the terms line up for illustrative purposes.

We see that any prime up to m in the denominator will have a multiple in the numerator's run m+1 through 2m, and so will be cancelled. Any prime up to n in the denominator has both a double and a triple in the set of numbers n+1 through 3n, and so both remaining powers of n! are cancelled, leaving a whole number in the numerator and 1 in the denominator for both m and n.

This was the first time I ever figured out how to factor a factorial notationally.
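As a sanity check on the claim (not a proof), the quotient (2m)! (3n)! / [(m!)^2 (n!)^3] can be tested numerically for small m and n; a quick sketch:

```python
from math import factorial as f

def is_integer_quotient(m, n):
    """Check that (2m)! (3n)! is divisible by (m!)^2 (n!)^3."""
    return (f(2*m) * f(3*n)) % (f(m)**2 * f(n)**3) == 0

# Holds for every small pair tried; (2m)!/(m!)^2 is the central
# binomial coefficient and (3n)!/(n!)^3 a multinomial coefficient.
assert all(is_integer_quotient(m, n) for m in range(1, 9) for n in range(1, 9))
```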

----------


## YesNo

The factorial problem seems to work as you described.

I also find it hard to trust the results of calculators, especially when the numbers get large. Many things can go wrong: implementation, computer hardware, programming. I am currently using Python for other purposes and it looks like it should work well for number theory. You can get Python by installing the anaconda distribution at https://www.continuum.io/downloads. The software is free.

It should do multi-precision arithmetic. I also interface it with Jupyter notebooks, which come with the anaconda distribution.

I checked the answer with Python to your previous problem and I also got 16 as you did. Here are copies of the results:

num = 4444**4444
total = sum(int(char) for char in str(num))
print(total)         # prints 72601
sum_of_total = sum(int(char) for char in str(total))
print(sum_of_total)  # prints 16

I didn't know how to code the sum of digits and so I searched for a solution and used this one, that is, I didn't come up with it on my own: https://www.codecademy.com/en/forum_...453d00020186c8

----------


## desiresjab

Yes, sir, I believe we have the 4444 problem settled. The best part of it was what was learned along the way. After finishing a problem, the next task is to find the next problem. One that is solvable is needed, but which will require considerable effort. One could work on the Goldbach conjecture, the twin prime conjecture or Brocard's problem, but would never have a reasonable chance of even making progress. But if one chooses an easier problem, it could lead to discoveries which might be of assistance on those unsolved questions later on.

I may throw in another Olympiad problem to hold us until a truly fascinating problem comes along.

----------


## desiresjab

The following is a type of problem I find extraordinarily difficult. Other people may see the answer fairly quickly, but I look at this thing and I am baffled where to even start. I have seen problems of this type which are even more brutal. I am sure there are number theoretic techniques to solve them, for I found this problem again in a prep test for math Olympiad. You see, I have a functional problem as a mathematician and a human being--if a technique looks ugly and cumbersome, I avoid it. I seem to be seeking the beautiful in mathematics. I post the following problem because it is so opposite to that, to me. It is quite brutal from my perspective. And since I have no techniques to solve it, it is a head-on, brain-against-problem sort of deal. There is probably also a solution out of formal logic. Here is the beast:

Every man in a village knows instantly when another's wife is unfaithful, but never when his own is. Each man is completely intelligent and knows that every other man is. The law of the village demands that when a man can PROVE that his wife has been unfaithful, he must shoot her before sundown the same day. Every man is completely law-abiding. One day the mayor announces that there is at least one unfaithful wife in the village. The mayor always tells the truth, and every man believes him. If in fact there are exactly forty unfaithful wives in the village (but that fact is not known to the men), what will happen after the mayor's announcement?

----------


## desiresjab

Let this one stay up for a while. I know some smart people would like to think about it. I may know the answer, but I am not exactly sure, either.

----------


## desiresjab

I think the question is rather poorly formed. It leaves a certain taste of ambiguity, especially in what is in parentheses. That is why I am going to attempt to answer it now, and get it out of here. Then I will move on to a really, really difficult one of this type which is well formed.

One of the villagers is probably the local mathematician. He asks each man in the village, including the mayor and himself, to count the number of wives they recall to have been unfaithful. They must only write down their own name and that number on their piece of bark. Then he collects each piece of bark and spreads them out for all to see. All forty men who recall only thirty-nine adulterous wives, must shoot their own before sundown. Even the mayor and the mathematician may end up shooting their wives. This works even if there are only forty-one men in the entire village.

Assuming that to be correct, it was not really that hard, I guess. Let us move on to a real monster, which I do not expect to be able to solve at all.

----------


## desiresjab

Try this one on for size, folks. It is a real baffler. Yet it does have number information which can obviously be used to solve it. I have worked on this one before, and I see from my notes that my work ended in confusion and uncertainty. I will give it another shot, after trying to determine what I was up to before. Sometimes it is quite excruciating to reconstruct your own logic from forgotten work, especially if the logic happened to be wrong! I am on the line now. But to tell you the truth, I have no confidence at all on this one.



Two positive integers are chosen. The sum is revealed to logician A, and the sum of the squares is revealed to logician B. Both A and B are given this information and the information contained in this sentence. The conversation between A and B goes as follows, B starting:
B: ` I can't tell what they are.'
A: ` I can't tell what they are.'
B: ` I can't tell what they are.'
A: ` I can't tell what they are.'
B: ` I can't tell what they are.'
A: ` I can't tell what they are.'
B: ` Now I can tell what they are.'

(a) What are the two numbers?
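One way to explore the puzzle mechanically is iterated elimination over a bounded grid of pairs. This is only a sketch (the real puzzle quantifies over all positive integers, so results near the bound are unreliable), and `candidates` is my own name for it:

```python
from collections import Counter

def candidates(bound, rounds=6):
    """Iterated elimination for the two-logician puzzle over pairs
    1 <= x <= y < bound. Each "I can't tell" removes every pair the
    speaker's number would have identified uniquely. B speaks on
    rounds 1, 3, 5 (square sums); A on rounds 2, 4, 6 (plain sums)."""
    pairs = {(x, y) for x in range(1, bound) for y in range(x, bound)}
    keys = [lambda p: p[0]**2 + p[1]**2, lambda p: p[0] + p[1]]
    for i in range(rounds):
        key = keys[i % 2]
        counts = Counter(key(p) for p in pairs)
        pairs = {p for p in pairs if counts[key(p)] > 1}
    # After six "can't tell"s, B can answer iff his square sum is now
    # unique among the surviving pairs.
    counts = Counter(p[0]**2 + p[1]**2 for p in pairs)
    return sorted(p for p in pairs if counts[p[0]**2 + p[1]**2] == 1)
```

With any finite bound the surviving pairs depend on the bound, so this only narrows the search; the genuine answer requires an argument over all positive integers.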

----------


## desiresjab

I know that thing is possible. It just hurts my head like h3ll, though.

----------


## YesNo

If all the men in the village are completely law-abiding, who is having sex with the unfaithful wives?

I might be missing something about the second problem. We know the sum of the two positive integers, call it S. Consider all the possibilities as A runs from 1 through S - 1 of the pairs A and S - A. Square each of these, A^2 and (S - A)^2, and see if the sum of those two squares equals the other known value. When it does, one has the two numbers.
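That brute-force scan might look like this in Python (a sketch; `find_pair` is my own name for it):

```python
def find_pair(s, q):
    """Scan a from 1 to s-1 and test whether a^2 + (s-a)^2 matches
    the given sum of squares q; return the pair or None."""
    for a in range(1, s):
        b = s - a
        if a*a + b*b == q:
            return (a, b)
    return None

print(find_pair(7, 25))  # -> (3, 4)
```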

----------


## desiresjab

I would approach this through two possible connections--the generalization of Fermat's theorem on the sums of two squares, in conjunction with the Pythagorean theorem. Only one might be necessary, or maybe neither. It could be wrong altogether, but this is where I would begin to sniff. It is necessary to know exactly what both professors learn from each answer of the other. One could perhaps walk upwards to the correct sum of squares this way. Brutal, but it could work. I am thinking perhaps the Pythagorean theorem can provide a shortcut to the answer, once one understands what is happening with each answer the professors give.

What you have to start with for this investigation is knowledge of just which numbers can be expressed as the sum of two squares. This is not too hard to remember: exactly those numbers whose prime factors of the form 4n+3, if there are any, all occur to an even power in the prime factorization. Only such numbers can be expressed as the sum of two squares.
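The criterion can be coded with plain trial division (a sketch; `is_sum_of_two_squares` is my own helper name, and 0^2 counts as a square here):

```python
def is_sum_of_two_squares(n):
    """Fermat's criterion: n is a sum of two squares (0 allowed) iff
    every prime factor congruent to 3 (mod 4) occurs to an even power."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            count = 0
            while n % d == 0:
                n //= d
                count += 1
            if d % 4 == 3 and count % 2 == 1:
                return False
        d += 1
    # whatever remains of n is prime (or 1)
    return not (n > 1 and n % 4 == 3)

print([n for n in range(1, 30) if is_sum_of_two_squares(n)])
# -> [1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20, 25, 26, 29]
```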

----------


## YesNo

I suppose we could shorten the brute force result with some algebra. Let the two unknown numbers be X and Y. We are given the sum of those numbers, X + Y, and the sum of the squares, X^2 + Y^2. As an example we can say the sum of the numbers is 6 and the sum of their squares is 26. We can change these parameters later. What are the numbers X and Y?

Since X + Y = 6, we know X = 6 - Y.

We can do the following transformation: X^2 + Y^2 = (6 - Y)^2 + Y^2 = 36 - 12Y + Y^2 + Y^2 = 36 - 12Y + 2Y^2.

We are given that 36 - 12Y + 2Y^2 = 26, so we can subtract 26 from both sides and get the following quadratic equation: 10 - 12Y + 2Y^2 = 2(Y - 1)(Y - 5) = 0. There will be two solutions to this equation. If they are integers then we have the solutions we want. We can see that Y could be either 1 or 5. X would be the other.
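The algebra generalizes to any parameters; solving the quadratic directly gives the pair without scanning (a sketch; `pair_from_sums` is my own name for it):

```python
import math

def pair_from_sums(s, q):
    """Given s = x + y and q = x^2 + y^2, solve
    2y^2 - 2sy + (s^2 - q) = 0, i.e. y = (s + sqrt(2q - s^2)) / 2,
    and return the pair when it is a pair of integers."""
    d = 2*q - s*s
    if d < 0:
        return None
    r = math.isqrt(d)
    if r*r != d or (s + r) % 2:
        return None  # the roots are not integers
    y = (s + r) // 2
    return (s - y, y)

print(pair_from_sums(6, 26))  # -> (1, 5)
```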

----------


## desiresjab

> I suppose we could shorten the brute force result with some algebra. Let the two unknown numbers be X and Y. We are given the sum of those numbers, X + Y, and the sum of the squares, X^2 + Y^2. As an example we can say the sum of the numbers is 6 and the sum of their squares is 26. We can change these parameters later. What are the numbers X and Y?
> 
> Since X + Y = 6, we know X = 6 - Y.
> 
> We can do the following transformation: X^2 + Y^2 = (6 - Y)^2 + Y^2 = 36 - 12Y + Y^2 + Y^2 = 36 - 12Y + 2Y^2.
> 
> We are given that 36 - 12Y + 2Y^2 = 26, so we can subtract 26 from both sides and get the following quadratic equation: 10 - 12Y + 2Y^2 = 2(Y - 1)(Y - 5) = 0. There will be two solutions to this equation. If they are integers then we have the solutions we want. We can see that Y could be either 1 or 5. X would be the other.


Your algebra is beautifully done and succinct. It tells us if we plug in the right number we will get back the number we should. It just does not tell us if we have selected the right input number in the first place to make the two professors volley back and forth for six separate "_I don't knows_" before professor B has enough information to answer the question.

I learned a lot from looking at the following sequences. It is a list of the numbers which can be represented as the sums of two squares. Zero counts; in symbols, 6^2 + 0^2 is how one gets 36 as a sum of two squares, etc. The bottom rows are the sums of squares. The numbers above them give the number of representations for each (how many different ways it can be represented as the sum of two squares).

reps:  1   1   1   1   1   1    1    1    1    1    1    1    2    1    1    1    1    1    1    1    1    1
n:     1   2   4   5   8   9   10   13   16   17   18   20   25   26   29   32   34   36   37   40   41   45

reps:  1    2    1    1    1    1    1    2    1    1    1    1    1    1    1    2    1    1    1     2
n:    49   50   52   53   58   61   64   65   68   72   73   74   80   81   82   85   89   90   97   100


Something pops out immediately: all the numbers representable in more than one way are divisible by 5.
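A table like the one above, and the divisibility observation, can be double-checked with a short script (a sketch; `two_square_reps` is my own helper, counting unordered representations with 0 allowed):

```python
import math

def two_square_reps(n):
    """Count representations n = a^2 + b^2 with 0 <= a <= b
    (order ignored, zero allowed, as in the table above)."""
    count = 0
    for a in range(math.isqrt(n // 2) + 1):
        b2 = n - a*a
        b = math.isqrt(b2)
        if b*b == b2 and b >= a:
            count += 1
    return count

# Every n <= 100 with more than one representation is divisible by 5.
multi = [n for n in range(1, 101) if two_square_reps(n) > 1]
print(multi)  # -> [25, 50, 65, 85, 100]
```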

If the sum of B's squares were anything less than 25, he would immediately know the decomposition, for there is only one. Both A and B realize this.

Let us assume the actual numbers are 5 and 0.

B knows A has a 3+4 or 5+0. A knows B has 13, 17 or 25.

As soon as B speaks, A knows B does not have a sum of 13, or 17, so he must have a 25. Since A's sum is 5, he knows the proper sum is 5+0.

* * * * *

Let us assume the actual numbers are 3 and 4.

B knows A has 3+4 or 5+0. A knows B has 49, 37, 29 or 25.

*B says* he doesn't know. This tells A that B has 25.

*A says* he knows.

* * * * *

The same thing seems to happen at 50. The first number with enough complexity to keep the professors volleying might be 100. At least A does not know the answer when B speaks for the first time. Or perhaps it is the first number representable in three different ways, which I read was either 285 or 385, I think the latter.

The proper sum of squares must force the number of returns in the volley to seven responses total.

----------


## desiresjab

I wonder if each pair of responses eliminates one root. But that would suggest a cubic equation, since there are three pairs of _call and response_ before B says he knows the answer. The farthest I can get is this: after B speaks, A also says he doesn't know, though I cannot determine what further information that gives B. That is for the sum of squares 100.

----------


## YesNo

I did not understand what the going back and forth was with A and B each saying they did not know the answer. Apparently they are not allowed to give each other the values they know. Each of them knows a limit on the possible answers without knowing each other's information.

----------


## desiresjab

My computer keeps freezing up. I lost a long post, but the solution was not in it. As soon as B speaks, A can eliminate all numbers but 50 and 100. But to B the set of possibilities in A's mind could be 50, 100, or 169. For, yes, 169 has two decompositions: 13^2 + 0^2 and 12^2 + 5^2.

And we see a doubly representable sum of squares does not have to be divisible by 5 after all. We have two intersecting sets, {50, 100} and {100, 169}, for 100 can be made from a sum of 10 or of 14. Only A knows whether he is holding a sum X + Y = 10, for instance, and not 14.

I will keep the posts relatively short to avoid freeze ups. To be continued as an editorial extension...right here

* * * *

----------


## desiresjab

The problem might be that the two sets {50, 100} and {100, 169} intersect. I have no idea whether non-intersecting sets of doubly representable sums of squares are even possible. Their intersection causes problems for me. If non-intersecting sets are possible, at this point I would have to guess the problem could be solved only with them, for I have trouble seeing where either one gains enough additional knowledge through the responses when the sets intersect.

----------


## desiresjab

Oh, boy. All our questions and more are answered in this little paper. The authors give examples of sums of squares with 1, 2, 3, 4, and 6 unique expressions (I can't remember if 5 was there), or factorizations, as these authors more properly call them.

A thorn in the path to watch for is their not counting 0^2 in their method of calculation. This means the standard formula (which I found elsewhere) will give different values. One only needs to subtract 1 from the total number of expressions.

The standard formula for the number of representations of a natural number as a sum of two squares is:

r2(n) = 4[d1(n) - d3(n)].

Here n is the sum of squares itself, and d1 and d3 signify the number of divisors of n, prime or otherwise, which are respectively congruent to 1 or 3 (mod 4). This counts ordered representations with signs. Divisors congruent to 2 or 0 (mod 4) are not germane, and not included in the calculation.


http://www.rowan.edu/colleges/csm/de...Submission.pdf
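The divisor-count formula can be checked against the list above. The sketch below uses the classical Jacobi form, r2(n) = 4[d1(n) - d3(n)], which counts ordered representations with signs (so it differs by fixed bookkeeping from counts that ignore order, signs, and 0^2):

```python
def r2(n):
    """Jacobi's two-square theorem: the number of ordered, signed
    representations n = a^2 + b^2 is 4*(d1 - d3), where d1 and d3
    count divisors of n congruent to 1 and 3 (mod 4)."""
    d1 = d3 = 0
    for d in range(1, n + 1):
        if n % d == 0:
            if d % 4 == 1:
                d1 += 1
            elif d % 4 == 3:
                d3 += 1
    return 4 * (d1 - d3)

print(r2(25), r2(100), r2(21))  # -> 12 12 0
```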

----------


## desiresjab

Now as far as the two professor problem goes, I have made a terrible, amateurish oversight. I always confess to my oafish ignorance and embarrassing oversights, especially the real whoppers, because they make such good copy.

Essentially, *words two and three of the problem* nail down a simple constraint I flat overlooked in my zeal to forge ahead. Our ignorance always forces learning upon us when we persist. I cannot be embarrassed for what I have learned, but only for what I used to not know, like yesterday.

We learned a lot, and the problem will be quite different when we return to it tomorrow--with some elements ejected from the set of prospects.

----------


## YesNo

> http://www.rowan.edu/colleges/csm/de...Submission.pdf


This article made me realize that whenever one is talking about sums of squares one should think Pythagorean theorem and circles.

----------


## desiresjab

> This article made me realize that whenever one is talking about sums of squares one should think Pythagorean theorem and circles.


Quite true. Remember that is what the Martinson article did.

The two professor problem is too hard for me unless I find another line of attack. Plus, it is not a fun problem for me, but has become a minor obsession anyway. For recreation I have to do beautiful problems. At least they are beautiful to me. I wonder if anyone agrees with me over what "looks" good in mathematics. Here is one.

When is x^(pq) divisible by (x^p)^q? Prove.

----------


## desiresjab

The last problem is very easy, of course. Let a = pq be the exponent on top, and b = pq the exponent on the bottom, since (x^p)^q = x^(pq).

From basic algebra: 

x^a / x^b = x^(a-b) = x^0 = 1.

Problems like this are good for refreshing yourself on technique.

The answer is for all x except 0.

----------


## desiresjab

Here is a pretty good one. It is countable, if you figure out how to do it. (Hint: there is also a technical way to do this.)

Four hundred people are standing in a circle. You tag one person, then skip k people, then tag another, skip k, and so on, continuing until you tag someone for the second time. For how many positive values of k less than 400 will every person in the circle get tagged at least once?

Have fun.

----------


## YesNo

It looks like there would be many ways in which at least one person would not be tagged, and that would depend on k sharing a divisor with 400. If k is relatively prime to 400, everyone should get tagged eventually.

But what about those values of k that have some factors that divide 400 and other factors that don't? For example 6? Just checking 6 and 10, those would skip some people as well.

So I assume the value would be the number of values of k relatively prime to 400. That would be given by Euler's phi function. I searched for a way to calculate that value: https://www3.nd.edu/~sevens/13150unit10.pdf

However, this isn't a proof that the above is correct, just some comments suggesting that it should be.

----------


## desiresjab

Yes. Euler's phi function is the right function. It will read off the answer directly. The actual work would look like this.

Φ(400) = 400 Π_{p|400} (1 - 1/p) = 400(1 - 1/2)(1 - 1/5) = 400(4/10) = 160.

Obviously, divisors of 400 cannot do the job. But every k relatively prime to 400 will circle the table, landing on a different residue (mod 400) at each step until everyone is tagged.
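The count can be checked by direct simulation (a sketch; it reads "skip k" loosely as advancing k positions per tag, the reading under which the phi-function answer applies, and `everyone_tagged` is my own name):

```python
def everyone_tagged(n, k):
    """Walk a circle of n people in steps of k positions per tag,
    stopping when someone is tagged twice; report whether the whole
    circle was covered. Coverage happens exactly when gcd(n, k) = 1."""
    tagged, pos = set(), 0
    while pos not in tagged:
        tagged.add(pos)
        pos = (pos + k) % n
    return len(tagged) == n

# Count the step sizes 1..399 that reach everyone.
count = sum(1 for k in range(1, 400) if everyone_tagged(400, k))
print(count)  # -> 160, which is phi(400)
```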

----------


## YesNo

I was thinking more about gravity lately. 

It occurred to me that finding something in the universe with a dark concentration of gravity larger than three solar masses would trigger the black hole portion of Einstein's gravitational theory. If one finds something like that then the theory says all the matter in that region collapses to a point of no radius. There is no counter source of energy, such as fusion in a star, able to keep the radius larger than zero. It vanishes into a singularity _of the theory_, a place that effectively looks like the theory is dividing by zero.

What can one conclude if one finds such concentrations of dark gravity? I think the most reasonable thing to conclude is that the theory is wrong. Just because the theory has a singularity doesn't mean that reality has singularities. I also suspect that a theory with singularities has got problems from within. Those singularities imply that the theory can reach a point where it stops serving as an explanation.

Now the theory also has problems explaining the rotation of galaxies. They move too fast to stay together without more matter (which is also dark). But one hasn't, so far, been able to find that dark matter. It looks like the theory fits the universe of the early 20th century well. That universe didn't have galaxies, dark gravitational sources (aka black holes) nor was it believed to be expanding.

----------


## Danik 2016

I can't follow your complex calculations, but I wonder if this article or the cosmology journal as such might be interesting for this thread:
http://journalofcosmology.com/JOC26/...26CONTENTS.htm

----------


## desiresjab

> I can't follow your complex calculations, but I wonder if this article or the cosmology journal as such might be interesting for this thread:
> http://journalofcosmology.com/JOC26/...26CONTENTS.htm


Yes/No and I will send our observations right over to those people.

----------


## YesNo

> I can't follow your complex calculations, but I wonder if this article or the cosmology journal as such might be interesting for this thread:
> http://journalofcosmology.com/JOC26/...26CONTENTS.htm


I do find the idea that "black holes" are more like MECOs (magnetospheric eternally collapsing objects) to be interesting. https://en.wikipedia.org/wiki/Magnet...lapsing_object

MECOs solve one problem with black holes: there is no mass for a zero radius object since the mass has been radiated away throughout eternity. A black hole has a large amount of mass but its radius is zero. It also resolves the problem of a black hole collapsing faster than the speed of light to that point. I liked how such a massive point (black hole) was called a "mathematical myth".

Supposedly there is an effort to build an earth-based set of radio telescopes on each side of the globe, called an Event Horizon Telescope, making the resolution power of such telescopes larger than what currently exists (https://en.wikipedia.org/wiki/Event_Horizon_Telescope). They would be able to point to the radio source at the center of our galaxy known as Sagittarius A*. Some believe this to be a black hole, but it failed to "eat" a cloud of matter passing by it a year ago as predicted, if it were the kind of black hole they imagined it to be.

John Moffat's theory does away with black holes entirely, but I don't know how this is done. Besides black holes, there is also the anomaly (based on Einstein's theory) of a too rapid rotation of galaxies as well as the singularity (like a black hole) at the beginning of the universe. Moffat's theory does not merge gravity with the other three forces found at the atomic level (electromagnetic, strong and weak nuclear forces). However, that might be a plus for his theory.

----------


## Danik 2016

> Yes/No and I will send our observations right over to those people.


Seems a good idea!

----------


## Danik 2016

> I do find the idea that "black holes" are more like MECOs (magnetospheric eternally collapsing objects) to be interesting. https://en.wikipedia.org/wiki/Magnet...lapsing_object
> 
> MECOs solve one problem with black holes: there is no mass for a zero radius object since the mass has been radiated away throughout eternity. A black hole has a large amount of mass but its radius is zero. It also resolves the problem of a black hole collapsing faster than the speed of light to that point. I liked how such a massive point (black hole) was called a "mathematical myth".
> 
> Supposedly there is an effort to build an earth based set of radio telescopes on each side of the globe, called an Event Horizon Telescope, making the resolution power of such telescopes larger that what currently exists (https://en.wikipedia.org/wiki/Event_Horizon_Telescope) They would be able to point to the radio source at the center of our galaxy known as Sagittarius A*. Some believe this to be a black hole, but it failed to "eat" a cloud of matter passing by it a year ago as predicted, if it were the kind of black hole they imagined it to be.
> 
> John Moffat's theory does away with black holes entirely, but I don't know how this is done. Besides black holes, there is also the anomaly (based on Einstein's theory) of a too rapid rotation of galaxies as well as the singularity (like a black hole) at the beginning of the universe. Moffat's theory does not merge gravity with the other three forces found at the atomic level (electromagnetic, strong and weak nuclear forces). However, that might be a plus for his theory.


What fascinates me is that theories today have to change very quickly to keep pace with the high amount of new discoveries.

----------


## desiresjab

Prove that for n ε N, (n!)! is divisible by (n!)^((n-1)!).

Here is a compact little gem. Divisibility is a fundamental concept in number theory, so it is always good to practice. At the moment I do not see an answer, but there should be a fairly simple way of showing that the denominator always divides the numerator--that the quotient is a whole number, which is what this problem asks for in plain English. Probably a factorization which allows something to be cancelled on the bottom. It has to be put into such a form first.
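Reading the divisor as (n!)^((n-1)!) (the usual form of this problem), the claim can at least be checked numerically for small n before attempting a proof; a quick sketch:

```python
from math import factorial

def divides(n):
    """Check that (n!)! is divisible by (n!) raised to the (n-1)! power.
    Combinatorially the quotient counts ways to split n! objects into
    (n-1)! unlabeled groups of size n, up to ordering of the groups."""
    return factorial(factorial(n)) % (factorial(n) ** factorial(n - 1)) == 0

assert all(divides(n) for n in range(1, 6))
```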

----------


## desiresjab

All the fancy math in the world is only fancy accounting. The speed of a falling object towards earth owes x to mass, t to time spent falling and r to air resistance; current owes so much to voltage and so much to resistance, and they have a relationship which can be stated in symbols:

I=E/R. Current equals voltage divided by resistance. Accounting.

Sometimes events in the universe are so fancy that it takes highly fancy accounting to account for them. Sometimes specific new accounting tools have to be developed to account for something, but it is always just accounting. Look how fancy and specific some of the tools of mathematicians are. They could not be used to do your bookkeeping. They do the bookkeeping for various aspects of the behavior of objects in the universe which you cannot use everyday arithmetic for.

The awesome thing is, sometimes the accounting tools are discovered and developed before the phenomena themselves are observed!

As for equations, the same old usual suspects apply across every field. An extraordinary number of phenomena around us are described by ordinary first degree equations. The equations that account for other phenomena are more complex and of a higher degree. Quadratic equations bit off another big hunk of what is understandable and can be accounted for.

Even the eight-miles-to-a-side matrices used in the attempts by physicists to simulate or reproduce consciousness are but attempts to account for consciousness the way -16t^2 accounts for the position of a falling object.

----------


## desiresjab

When Newton fully generalized the binomial theorem for the first time, what was he up to? This provides one of the best examples of the accounting tool being developed before the phenomenon it accounts for is even suspected, in this case _phenomena_ many times over. The binomial theorem is one of the most universally applicable, from biology to backgammon to sociology. To Newton it was an exercise in arithmetic. He could have had no idea of the landslide of applications to come, even if he felt certain there would be some applications somewhere or th'other.

I am always comforted that at least a few of our brightest are doing pure mathematics. We are not the first. We could ask ourselves a big question. How did the ancients do it?

How did Indian mathematicians before the time of Christ come up with at least a way to calculate individual binomial coefficients? We cannot be sure they did not already have the general method Newton developed. An even greater question is: Why? Why would they do it? It is almost certain they had no application. In fact, it looks like humans had to develop the ideas behind the binomial theorem at least three times in our history, forgetting it twice, apparently because it had no use.

The same thing that drove Newton must have driven those second century B.C. Indians and twelfth century Muslims who were onto the theorem before. It was about the behavior of numbers. I think they were simply engaged in a form of play that requires a lot of concentrated mental focus.

Now, if IQ measures anything, it measures the ability to do this kind of abstract accounting. It seems unlikely to me that ancient Indians would have measured out at an average (over the entire population) higher than modern Indians on IQ tests, an average we know is lower than the west, lower than China and lower than Ashkenazi Jews, yet they did get this job done, i.e. the binomial theorem and the coefficients of its expansion.

This tells me human populations with an average as low as 85 will get the job done. They will occasionally produce geniuses enough standard deviations to the right to produce mathematical achievement. This lower average was perhaps unable to produce enough geniuses to sustain mathematics through startup technologies it could serve as the accounting firm for.

However, we should not kid ourselves that the average European slopping through mud on his way to mass in the fourteenth century would have an IQ of 100 by modern standards, if we had a time machine and could test them. In point of fact, this person is likely right on par with a real dummy these days. 

But this European had one mighty advantage that none before him had possessed--Gutenberg. Ideas could be disseminated as never before. Without Gutenberg mankind might have forgotten the binomial theorem a third time.

----------


## YesNo

> Prove that for n ε N, (n!)! is divisible by n!(n-1)!.


I was thinking of using a mathematical induction argument on this, but I only got as far as the base cases for n = 1, 2 and 3. However, I don't know how to get the inductive step to work algebraically. That is given that the statement is true for n, how can I algebraically manipulate the factorials so that it has to be true for n + 1?

So, I looked up the problem. Here is a combinatorial solution that I am not totally convinced of, but I assume is correct: http://math.stackexchange.com/questi...isible-by-nn-1

What I liked about that solution was the way the person who answered it rewrote the problem as a fraction and then reinterpreted it as a combinatorial problem that must have an integer as a solution. Hence the numerator is divisible by the denominator.

One might also be able to use the gamma function which would allow one to rewrite the problem as integrals. But I didn't see how that would make the inductive step any easier to calculate.
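Before hunting for a proof, the claim is cheap to check numerically for small n. A minimal sketch in Python (reading the divisor as (n!) raised to the power (n-1)!, as in the linked Stack Exchange question; the helper name is mine):

```python
from math import factorial

def divides(n):
    # check that (n!)! is a multiple of (n!)^((n-1)!)
    return factorial(factorial(n)) % factorial(n) ** factorial(n - 1) == 0

# n = 5 already gives a numerator with thousands of digits, so stop at 4
print([divides(n) for n in range(1, 5)])  # [True, True, True, True]
```

Python's arbitrary-precision integers make the brute-force check exact, with no floating-point worries.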

----------


## desiresjab

> I was thinking of using a mathematical induction argument on this, but I only got as far as the base cases for n = 1, 2 and 3. However, I don't know how to get the inductive step to work algebraically. That is given that the statement is true for n, how can I algebraically manipulate the factorials so that it has to be true for n + 1?
> 
> So, I looked up the problem. Here is a combinatorial solution that I am not totally convinced of, but I assume is correct: http://math.stackexchange.com/questi...isible-by-nn-1
> 
> What I liked about that solution was the way the person who answered it rewrote the problem as a fraction and then reinterpreted it as a combinatorial problem that must have an integer as a solution. Hence the numerator is divisible by the denominator.
> 
> One might also be able to use the gamma function which would allow one to rewrite the problem as integrals. But I didn't see how that would make the inductive step any easier to calculate.


I cannot remember ever using induction successfully on anything other than homework assignments long ago. The notation is difficult and requires brain-wracking precision. It is not one of my techniques, which I will eventually regret.

Some of these problems come from textbooks, and the student is expected to use the same techniques studied in each section to solve the problems. You might have noticed I seldom do this. I use my own bag of preferred tricks on almost everything, picking up additional information as I go.

One thing I find torturous is long algebraic manipulation in search of a derivation. That was Euler's style. I avoid it when I can. Induction would be a good example. Several times in my solutions, however, I point to induction as the final step of a process without actually carrying it out, if I am pretty sure of myself.

* * * * *

Both sides are built from factorials, so it should suffice to show that the numerator contains every factor of the denominator, with room to spare.

The denominator is (n-1)! copies of n! multiplied together:

(n!)^((n-1)!) = n! · n! · n! ··· n!   [(n-1)! copies]

The numerator can be peeled open as

(n!)! = n! · (∏ i, the product running from i = n+1 up to n!)

so the fraction is

[n! · (∏ i from n+1 to n!)] / [n! · n! ··· n!, with (n-1)! copies below]

Obviously, a factor of n! can be canceled in this fraction at once. That does not yet show the top covers the bottom.

The concept is the one we used before with this notation. Split the numerator 1 · 2 · 3 ··· n! into (n-1)! consecutive blocks of n integers each. The product of any n consecutive integers is divisible by n! (the quotient is a binomial coefficient), so each of the (n-1)! blocks cancels one of the (n-1)! copies of n! in the denominator.

Note: I cannot place the start and end limits of the multiplication sign ∏ above and below the operator here, so I have spelled them out in words instead.

----------


## YesNo

> Even the eight-miles-to-a-side matrices used in the attempts by physicists to simulate or *reproduce* consciousness are but attempts to account for consciousness the way -16t^2 accounts for the position of a falling object.


I was thinking about this while we were walking around Oak Park looking at early Frank Lloyd Wright houses. I don't think it is possible to reproduce consciousness with a mathematical structure. 

One might be able to simulate some aspects of consciousness or find correlates of consciousness in the brain, but at some point this fails to reproduce consciousness. The reason is because the mathematical structure, based on determinism and randomness, cannot make a choice. However, consciousness could be characterized as having an ability to make a choice, no matter how limited. That implies that the property of making a choice cannot be mapped to a deterministic-random structure.

This property of making a choice is not the "hard" problem of consciousness. That would have to do with subjective experience and "qualia". That so-called hard problem limits consciousness to sentient animals (and perhaps plants) who can be expected to have subjective experiences. 

This is a harder problem of consciousness and allows anything, including indeterministic quantum reality, to be conscious in its own way with or without qualia.

Of course the counter argument is that nothing is able to make a choice which is why the indeterminism of quantum reality is shocking. If one defines choice as being outside a deterministic/uniformly random structure, then that quantum indeterminism can be interpreted as making a choice. But we don't have to deal with reality directly at that level. We see it as larger clumps where it behaves more predictably. We can make tables, chairs and computers out of it and think it is all mathematically predictable because those tables, chairs and computers as tables, chairs and computers don't make choices. They are not conscious as such.

----------


## desiresjab

> I was thinking about this while we were walking around Oak Park looking at early Frank Lloyd Wright houses. I don't think it is possible to reproduce consciousness with a mathematical structure. 
> 
> One might be able to simulate some aspects of consciousness or find correlates of consciousness in the brain, but at some point this fails to reproduce consciousness. The reason is because the mathematical structure, based on determinism and randomness, cannot make a choice. However, consciousness could be characterized as having an ability to make a choice, no matter how limited. That implies that the property of making a choice cannot be mapped to a deterministic-random structure.
> 
> This property of making a choice is not the "hard" problem of consciousness. That would have to do with subjective experience and "qualia". That so-called hard problem limits consciousness to sentient animals (and perhaps plants) who can be expected to have subjective experiences. 
> 
> This is a harder problem of consciousness and allows anything, including indeterministic quantum reality, to be conscious in its own way with or without qualia.
> 
> Of course the counter argument is that nothing is able to make a choice which is why the indeterminism of quantum reality is shocking. If one defines choice as being outside a deterministic/uniformly random structure, then that quantum indeterminism can be interpreted as making a choice. But we don't have to deal with reality directly at that level. We see it as larger clumps where it behaves more predictably. We can make tables, chairs and computers out of it and think it is all mathematically predictable because those tables, chairs and computers as tables, chairs and computers don't make choices. They are not conscious as such.


Good post. But then, I did not say I thought it was possible or impossible, just that that is what the boys and girls with the eight-miles-on-a-side matrices were trying to do: some very fancy accounting.

Because we are unable to define consciousness adequately, it would be hard to reproduce it, and hard to know whether we had. Movies are full of rebellious machines.

Personally, I would not be surprised if some connection between consciousness and prime numbers exists. That is why, of all the unsolved problems in elementary mathematics, I consider the Goldbach conjecture the most important, for it is unlikely to be proven without shedding much light on the additive properties of primes, about which almost nothing is known.

I will now post a problem my bag of tricks seems insufficient for. I will have to pick up some new tricks, unless I notice a way I have missed. The problem looks simple enough.

Let n = n1 + n2 + ... + nk, where the ni are nonnegative integers. Prove that the quantity

n! / (n1! + n2! + ... + nk!)

is an integer.

----------


## desiresjab

Something is wrong! I must have stated the problem incorrectly. Maybe it asks under what conditions, because 5 + 2 = 7, but 5! + 2! does not divide 7!.

I will have to recheck the wording of this problem, since a counterexample is so readily available.

----------


## YesNo

I checked that it doesn't work with 5, 2 and 7 using a Jupyter notebook, as you noted. However, 3, 4 and 7 work.

I think the following is true: The sum of the factorials of two consecutive positive integers divides the factorial of their sum.

It doesn't work for 0, 1 and the sum 1 so the integers have to be positive.

A more general statement might work: The sum of the factorials of consecutive positive integers divides the factorial of their sum. I don't have a proof for this. It is just a conjecture. It looks like a nice pattern.
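The consecutive-integer conjectures above can at least be checked by brute force for small runs. A quick Python sketch (the helper name and the run lengths tested are my choices, not part of the conjecture):

```python
from math import factorial

def sum_of_factorials_divides(start, runlen):
    # Does start! + (start+1)! + ... divide (sum of the run)! ?
    run = range(start, start + runlen)
    return factorial(sum(run)) % sum(factorial(i) for i in run) == 0

pairs = all(sum_of_factorials_divides(a, 2) for a in range(1, 15))
triples = all(sum_of_factorials_divides(a, 3) for a in range(1, 10))
print(pairs, triples)  # True True
```

Passing these small cases is evidence, not proof; the pair case can also be settled by hand, since k! + (k+1)! = k!(k + 2) and the product (k+1)(k+2)···(2k+1) already contains the factor k + 2.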

I wonder if the following works: does the sum of the factorials of consecutive primes divide the factorial of their sum? That would be another nice pattern, if it is true.

----------


## desiresjab

I have to find the question and read it again. I think it is related to De Polignac's formula, because that is the section I found it under. It was also a solved problem.

Those other patterns are interesting. The answer I have to give to them is: I don't know. It is the kind of stuff you might find in the sources I am consulting.

In the meantime, here is a problem that is a killer. It might not look like it, but it is.

Prove that 7 divides 2222^5555 + 5555^2222.

It cannot be done in what seems the intuitive way, reducing everything in sight (mod 7).

That is the scary part. I always thought you could reduce with impunity any time you felt like it or it was handy. Apparently that is not the case, otherwise 3^4 + 4^3 would be divisible by 7, but it is not. This simple-looking problem shows the traps involved in congruence theory. I wish I knew why 3^4 + 4^3 does not work, but the truth has not sprung on my brain yet.

I did not solve this problem; I looked at the answer. It took a while before their answer made sense, because I have a stubborn brain. I see why their answer is correct, but I do not see why the simpler version is incorrect. That is another problem in itself. I will not be able to let go of this until I understand it. I foresee torture.
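For what it is worth, the machine agrees. A two-line check with Python's three-argument `pow`; the trap, I believe, is that bases reduce mod 7 but exponents reduce mod the order of the base (at most 6, by Fermat's little theorem), so reducing the exponents mod 7 as well changes the answer:

```python
# 2222 ≡ 3 and 5555 ≡ 4 (mod 7); pow reduces the base at every step
print((pow(2222, 5555, 7) + pow(5555, 2222, 7)) % 7)  # 0, so 7 divides the sum

# the trap: also reducing the exponents mod 7 gives a different residue
print((pow(3, 4, 7) + pow(4, 3, 7)) % 7)  # 5, not 0
```

Three-argument `pow` does modular exponentiation without ever forming the astronomically large power.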

I will let you look at this one for a spell while I go look up the other one.

----------


## desiresjab

I made a horrible oversight. No wonder the proposition was not true as stated. All the separate nk's do sum to n, but the problem asks whether n! divided by the *product* of those factorials (not the sum, as I printed) is an integer. It now looks easy, but De Polignac's formula and the method they are using are complicated.

Now that I have righted the mistake in the question itself, I am wondering if some of my old tricks might solve this one more easily than all of De Polignac's torturous Eulerian algebra. Will get back.

----------


## desiresjab

Prove that if n1 + n2 + ... + nk = n, then

n! / (n1! n2! ... nk!)

is an integer.

That is the problem stated correctly. It does not seem like a monster now. Maybe it is, though. Later.

----------


## YesNo

Not having the plus sign in the denominator makes this look easier. The first factor n1! cancels easily.

For my last conjecture about consecutive primes, a counterexample would be 5 and 7 showing the conjecture is false.

Edit: What you are asking is if the coefficients of the terms in the "multinomial theorem" are integers. Here is some discussion of it: https://en.wikipedia.org/wiki/Multinomial_theorem

They would be expected to be integers since they are involved with counting elements in sets. That would mean that the fraction above is an integer.
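The counting argument can be spot-checked directly. A small sketch (the function name and the example partitions are mine; the integer division is exact precisely because the divisibility claim holds):

```python
from math import factorial

def multinomial(parts):
    # n! / (n1! n2! ... nk!) with n = n1 + ... + nk, via exact integer division
    denom = 1
    for p in parts:
        denom *= factorial(p)
    q, r = divmod(factorial(sum(parts)), denom)
    assert r == 0  # no remainder: the coefficient counts arrangements
    return q

print(multinomial([3, 4]))     # 35, the binomial coefficient C(7, 3)
print(multinomial([2, 2, 3]))  # 210
```

The k = 2 case reduces to the ordinary binomial coefficient, which ties this back to the earlier binomial-theorem discussion.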

----------


## desiresjab

I have been away for a few days. 

What you say about the coefficients is quite likely true. While I was away I only thought about the problem in my head, no paper or computers. My goal on many such problems is to reduce the solution to a simple diagram and a little algebra. Just _how_ elementary can this grade of a problem be made to look? Look how far Eisenstein reduced QR with his simple diagram. This is a much simpler problem. The diagram will be a lot simpler. I already know the diagram. What I have left to do is the little bit of algebra. I am rather tired after my drive. It might be tomorrow.

----------


## desiresjab

x+y=n...n-y......n-x

*
*
* 
*............*
*............*..........*
*............*..........*
*............*..........*

n............x..........y


The column for n = x + y has as many units left over at the top as the column for n - x (that is, y) has in total. Those three larger leftover numbers, taken as a product, exceed what the three smaller numbers of y! produce. So the numerator is always greater than or equal to the denominator, a necessary condition for an integer.

n will always have y units left over at the top after it matches every unit of x. In the case above, n has three units left over, i.e. three consecutive numbers. Being of equal length with y's column, that stretch of consecutive integers is forced to contain every factor of puny y factorial, always getting the first factor of 1 for free. Getting that first factor of 1 for free guarantees that it will not miss any factor of y factorial.

Now, if we imagine placing y's column directly on top of x's, we see that 5 is a multiple of 1, 6 is a multiple of 2 and of 3, and 7 is a prime. 7 had to be a prime, because all the earlier factors had just been used and could not be used again on the very next number.

This proves they will always divide evenly.

This "proof" is intuitively strong, I believe, but of course it is algebraically weak. I could use fewer words, but I prefer to be as clear as possible for anyone making an effort to follow. A modern algebraist would not use many words at all. Wordy proofs are sort of antique in nature. But the fact is, if you can show it to yourself verbally, you will truly see it.

Suppose I had chosen 6 and 1 for the additives, instead of 4 and 3. That is the easiest case of all, because 1! affects nothing, and we can plainly see that 6! is a factor of 7!. 

I am experimentally certain one could go a step further and show that (4!)(3!) is a factor of (5!)(2!), (5!)(2!) is a factor of 6!... right on up the line as far as it goes. I can see that but I have not demonstrated it.

----------


## YesNo

I trust algebraic proofs as much as I trust the results of a computer program. There could be something wrong in either that has little to do with the original problem. Being convinced is the intuitive part of the proof. 

One way that I think about factorials is to view them as the number of ways to order n objects. For the first position one has n choices. After that choice there are n - 1 choices for the next position, and so on, all the way down to the last piece, which can go only one way. Multiply all those together and you get n!. For the multinomial coefficient, the numerator counts the ways to order all n objects; the denominator counts the ways to order the subsets of those objects separately.

I was looking at Carmichael's "Theory of Numbers", and he approaches the problem by considering the highest power of a prime p in n!, which I am not familiar with but which sounds interesting. Then if the highest power of each prime in the denominator is less than or equal to its highest power in the numerator, the fraction is an integer. This might be more useful than the multinomial coefficient approach, since it could answer more questions.
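That highest-power-of-p device is what is usually called Legendre's (or de Polignac's) formula, and it takes only a few lines to implement; a sketch, with the function name my own:

```python
def prime_power_in_factorial(n, p):
    # Legendre's formula: the exponent of prime p in n! is sum of floor(n / p^k)
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

# e.g. 7!/(3! 4!): for every prime, the numerator's exponent covers the denominator's
for p in (2, 3, 5, 7):
    print(p, prime_power_in_factorial(7, p),
          prime_power_in_factorial(3, p) + prime_power_in_factorial(4, p))
```

Comparing these exponents prime by prime is exactly the integrality criterion described above.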

I agree that it is good to find multiple ways to solve something. The easier, the better.

----------


## desiresjab

> I trust algebraic proofs as much as I trust the results of a computer program. There could be something wrong in either that has little to do with the original problem. Being convinced is the intuitive part of the proof. 
> 
> One way that I think about factorials is to view them as the number of ways to order n objects. In the first position one could have n choices. After that choice there are n - 1 choices for the next position and so on all the way down to the last piece and there is only one way to order it. Multiply all those together and you get n! For the multinomial formula coefficient the numerator is the way to order n objects. The denominator is the way to order subsets of those objects separately. 
> 
> I was looking at Carmichael's "Theory of Numbers" and he approaches the problem by considering the highest power of a prime p in n! which I am not familiar with but sounds interesting. Then if the highest power of a prime in the denominator is less than or equal to the highest power of the prime in the numerator the fraction would be an integer. This might be more useful than the multinomial coefficient approach since it could answer more questions.
> 
> I agree that it is good to find multiple ways to solve something. The easier, the better.


Could not agree more. I have looked at the link now and what we have here is the same recipe taken out of context. Really, every time one is working with factorial formulae it would be wise to check for this connection.

Carmichael may be using Polignac's formula. That is what it does, I believe. 

Factorials are beasts. It is easy enough to see what they are and what they designate for counting (very good description of their role in the multinomial, by the way), but when you want to compare them to something like a power it becomes sticky. One reason I appreciate Wilson's theorem so much is that it relates factorials to powers. That is cool.

It is time to move on. My own urge is toward Brocard or Goldbach. I have learned a lot from Brocard's problem. The Goldbach conjecture, however, is difficult to make even a quarter inch of progress on. I suppose a place to start would be Ramanujan's amazing formula for counting additive partitions. No one believed such a formula was even possible.

When looking at problems which minds like his have already considered, one can only hope they were in a great hurry that day.

----------


## desiresjab

If all the mysteries of a simple number line were solved, the human race might become more advanced than the alien civilizations we envision in our fiction. Complete power over the number line would be god-like. Solving certain problems would contribute mightily to achieving this power. The Riemann hypothesis is high on the list. I also think of Goldbach's conjecture as very important.

The twin prime conjecture is one that fascinates people. We know something about twin primes from studying QR on here for so long. Since twin primes are a mixed couple in terms of 4n+1 and 4n+3, their combined Legendre symbols will be (1)(-1)=-1.
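The mixed-couple observation follows from Euler's criterion for the symbol (-1/p): it is +1 when p ≡ 1 (mod 4) and -1 when p ≡ 3 (mod 4), and a twin pair beyond (3, 5) always contains one of each. A small check (the function name is mine, and the twin pairs listed are just the first few):

```python
def legendre_minus_one(p):
    # Euler's criterion for (-1/p), p an odd prime: (-1)^((p-1)/2)
    return 1 if (p - 1) // 2 % 2 == 0 else -1

for p, q in [(5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]:
    print(p, q, legendre_minus_one(p) * legendre_minus_one(q))  # always -1
```

The reason is simple: twin primes p and p + 2 above 3 straddle a multiple of 4, so one is 4n+1 and the other 4n+3.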

Working on unsolved problems there is generally little or nothing to report. Many a doctoral dissertation in math has explored some tiny area of these problems. Any progress is counted a success.

----------


## YesNo

I didn't see a reference to Polignac, but it looks like the same argument after checking Wikipedia.

We should probably just assume the Riemann hypothesis is true. Then derive some consequences from it and check if they are true or not. Inadvertently someone might discover that it is false by finding a consequence of it that is false. 

I am still working on the Sierpinski problem. Or rather, I think about it off and on. There isn't a lot of work getting done.

----------


## desiresjab

The Riemann hypothesis may be overrated in terms of what the impact of its solution would be. I do not know for sure. Researchers can already assume it is true and take it from there. That is done with a lot of propositions; I believe such results are called conditional when a proof in the chain is missing. Can anyone say what else would be immediately true if the Riemann hypothesis were true? Perhaps it is a hugely significant problem on its own.

It seems to me that a proof of Goldbach's conjecture would almost necessarily lead to a fuller understanding of the additive nature of primes. Is there even such a theory to be had? What the mathematical world needs right now is Ramanujan. If Goldbach is solvable, I think Ramanujan had the best chance. Had he lived beyond his twenties it might be solved now.

There are two biographies of mathematicians I can recommend: _The Man Who Loved Only Numbers_ and _The Man Who Knew Infinity_, about Paul Erdos and Ramanujan respectively. They are both fantastic reads in my opinion.

I turned back to Brocard's problem and immediately made progress on an area of it I was not equipped to handle several years ago. I am going to hang with it again for a while and see if anything else pops out for me. Originally, I said all I wanted to do was see if I could reach the same vista where Ramanujan had stood before he gave up on this problem. Perhaps I have done that, for the present stage seems impassable. But of course no one knows where Ramanujan stood, so I must trudge onward.

----------


## YesNo

According to Wikipedia, Brocard's problem (https://en.wikipedia.org/wiki/Brocard%27s_problem) is related to the "abc conjecture": https://en.wikipedia.org/wiki/Abc_conjecture. However, I don't see the connection at the moment.

This article searching for solutions was also cited: http://www.math.uiuc.edu/~berndt/articles/galway.pdf

There is also some discussion on stack exchange: https://math.stackexchange.com/searc...rd%27s+problem

----------


## desiresjab

I can see where it might relate. Almost everything in number theory seems to relate to everything else anyway, and it is still Diophantine equations. They also use Legendre symbols from QR in the research. Since so much relates, tracing those relations is what the smart guys have been doing, uninterrupted, for a couple of hundred years, and it is exactly how we know the importance of particular unsolved problems for immediately solving other unsolved problems. It is amazing how the brilliant boys and girls keep knocking chips off these problems until someone gets a finished sculpture. They do it with brilliance in various areas of high mathematics. The Japanese mathematician who claims a proof of the abc conjecture invented entirely new methods, from the reports, which went far outside number theory, perhaps outside any existing theory. I still have to look at it to see whether I can make anything of what he says.

In the meantime, I trudge along. I am within sight of a new perch from which to see the problem.

----------


## desiresjab

Most of the progress I thought I had made on Brocard was illusory, as further thought showed, though there was a bit of an increase in understanding. Still, I am essentially where I was the last time I left off working on it. I feel a real shortage of tools. The problem with known tools is that everyone greater than I am has already tried them.

----------


## YesNo

There is a proof of the abc conjecture that people are trying to verify although the proof is very long: http://phys.org/news/2016-08-abc-proof.html

If that proof convinces others then understanding it may be more important than Brocard's problem. However, I am still trying to understand why the abc conjecture is relevant to Brocard's problem.

----------


## desiresjab

> There is a proof of the abc conjecture that people are trying to verify although the proof is very long: http://phys.org/news/2016-08-abc-proof.html
> 
> If that proof convinces others then understanding it may be more important than Brocard's problem. However, I am still trying to understand why the abc conjecture is relevant to Brocard's problem.


I am aware of that proof. It uses properties of number classes that are outside conventional mathematics. Elliptic curves are still generating solutions to unsolved problems. These guys are using group theory, abstract algebra, moduli and Elliptic curves and topology. His proof uses almost no calculus, so is elementary, which means anything but simple, as I have been trying to convince people for a long time.

Anyone who is not an expert in the above fields may as well forget actually understanding his proof. I know that leaves me out. Of course I will poke around with it anyway.

----------


## desiresjab

Along with all sorts of other strange mathematical objects and operations, such as rings, ideals, kernels and cosets, he uses something called _theatres_. This involves treating certain groups as if they were abstract topological fields without the labeling. Mochizuki provides this new labeling. There is a distortion during the operations of multiplication and addition in the ring that he measures and accounts for outside the group.

It does not pop out at me why abc is linked to Brocard. But every time I read about this stuff in depth I learn something new. For instance, I finally understand clearly what an _algebraic integer_ is. _Ideal rings_ are much clearer now too. Immediately I see a connection of theirs to Brocard. Sorry I have to be so cryptic. You see, I am slightly loony and still intend to solve this thing before the abc conjecture does it sweepingly!

It is truly amazing to me how far number investigations have been taken. Now Mochizuki has added some new beasts to the zoo.

----------


## YesNo

Based on the likelihood that the abc conjecture is true, the number of solutions to Brocard's problem is finite. The only question remaining is either to show that all the solutions have already been found or to find another one, which would have to have more than 20 digits. Another contribution would be to find an algorithm faster than computing quadratic residues to check that a solution does not exist.
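For the record, the naive search itself is short; a sketch using exact integer square roots (the function name is mine, and this merely re-finds the three known pairs, saying nothing about larger n):

```python
from math import factorial, isqrt

def brocard_hit(n):
    # is n! + 1 a perfect square?
    s = factorial(n) + 1
    r = isqrt(s)
    return r * r == s

print([n for n in range(1, 30) if brocard_hit(n)])  # [4, 5, 7]
```

`math.isqrt` returns the exact floor of the square root for arbitrarily large integers, so the perfect-square test has none of the rounding hazards of `math.sqrt`.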

----------


## desiresjab

> Based on the likelihood that the abc conjecture is true, then the number of solutions to Brocard's problem is finite. The only question remaining is to either show that all the solutions have already been found or to find another one which should have more than 20 digits. Another contribution would be to find an algorithm faster than computing quadratic residues to check that a solution does not exist.


I have an approach to Brocard I have not seen elsewhere per se. An instinct tells me it is solvable and that I am on a good course. I chose this problem long ago because its shape was pleasing to me. I like factorials. Right now I am at an impasse, looking for a way around. One is always at an impasse on unsolved problems, and then suddenly a little progress is made. Often these seeming advances are illusory, the result of a mistaken notion one realizes later. So progress really is slow, but that makes any advance exciting.

----------


## YesNo

It is always good to come up with a simpler solution than the ones known.

Regarding the abc conjecture, I can see how the form relates to Brocard's problem, which is n! + 1 = m^2. Here a = n!, b = 1 and c = m^2. The product of all distinct primes in abc is close to c; that is, the product of the distinct primes in (n!)(1)(m^2) is about m^2. I can see that m would be larger than n, but why wouldn't it have some primes in common with n!?
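To make the reading concrete, one can compute the radical rad(abc) (the product of the distinct primes dividing abc) for the three known solutions; a sketch with trial-division factoring, which is fine at this size (the function name is mine):

```python
from math import factorial

def rad(n):
    # product of the distinct primes dividing n, by trial division
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

# a = n!, b = 1, c = m^2 for the known pairs with n! + 1 = m^2
for n, m in [(4, 5), (5, 11), (7, 71)]:
    print(n, rad(factorial(n) * m * m), m * m)
```

For these three pairs the radical actually comes out larger than c, which is the direction the abc conjecture predicts for all but finitely many triples.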

----------


## desiresjab

> It is always good to come up with a simpler solution than the ones known.
> 
> Regarding the abc conjecture, I can see how the form relates to Brocard's problem, which is n! + 1 = m^2. Here a = n!, b = 1 and c = m^2. The product of all distinct primes in abc is close to c; that is, the product of the distinct primes in (n!)(1)(m^2) is about m^2. I can see that m would be larger than n, but why wouldn't it have some primes in common with n!?


Because m^2, being one greater than n!, can share no factors with it. Euclid used the same idea in his proof of the infinitude of primes.

You set Brocard into the form correctly. I still cannot see why it would have an impact on Brocard's problem, because the task there is to show whether the difference between a square and a factorial can ever be _exactly_ 1, other than in the three known cases. Researchers are trying to bound the function from above.

There is a relationship between factorials and triangular numbers I find fascinating.

(2n)! = 2^n · T_1 · T_3 · T_5 ··· T_(2n-1)

Stunning!! What this really says is: factor n powers of 2 out of (2n)!, and what remains is exactly the product of the odd-indexed triangular numbers, taken all the way up to index one less than twice n; multiply back by 2^n and you are at (2n)! again.
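The identity is quick to confirm numerically; a small Python sketch (T_m here is the m-th triangular number m(m+1)/2, and the helper names are mine):

```python
from math import factorial

def triangular(m):
    return m * (m + 1) // 2

def identity_holds(n):
    # (2n)! == 2^n * T_1 * T_3 * ... * T_(2n-1)
    prod = 1
    for k in range(1, n + 1):
        prod *= triangular(2 * k - 1)
    return factorial(2 * n) == 2 ** n * prod

print(all(identity_holds(n) for n in range(1, 12)))  # True
```

The why is visible in one line of algebra: T_(2k-1) = (2k-1)(2k)/2, so the product over k = 1..n gives (2n)!/2^n.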

----------


## YesNo

Right. I can see now that m^2 must be relatively prime to n!, because any factor of n! that divides m^2 must also divide 1.

The relationship with triangular numbers is interesting.

----------


## desiresjab

> Right. I can see now that m^2 must be relatively prime to n! because any factor of n! that divides m^2 must also divide 1.
> 
> The relationship with triangular numbers is interesting.


It seems a queer relationship offhand and I have to investigate it.

----------


## desiresjab

> It seems a queer relationship offhand and I have to investigate it.


I have some of it already. It is tying in with what I am doing and what I was almost able to see last time I quit the problem.

But still, why on earth is there a connection between factorials and the product of these oddly labeled triangular numbers? I have not seen that part yet.

You have been so attentive and perspicacious that I have to give you something. The hypothetical very large factorial in Brocard's problem has to be equal to 8 times some single triangular number. That fact should not have eluded me for so long, for I have long had everything I needed to realize it, but I am rather slow at this business. I miss things a math teacher would see routinely.

Can you figure out why the factorial must be equal to 8 times some single triangular number for there to be another pair of Brown numbers? I think you can. I believe it is within your range, from what you have shown me.

----------


## YesNo

This might work as a way to show that a solution of Brocard's problem is 8 times a triangular number.

Let n and m be integers such that n! + 1 = m2. Then n! = m2 - 1 = (m - 1)(m + 1). 

Brocard's problem does not work for n = 1 and so n > 1. Notice that m must be odd because otherwise for n > 1, n! and m have a common factor. That means m - 1 is even. Let 2r = m - 1. Then 2r + 2 = m + 1. 

Then (m - 1)(m + 1) = 2r (2r + 2) = 4(r (r + 1)). Multiply this by 2/2 = 1 to get 8(r(r + 1)/2). 

The r-th triangular number is the sum of all positive integers less than or equal to r, which can be written algebraically as r(r + 1)/2. This shows that r(r + 1)/2 is the desired triangular number, since the factor above has exactly that form; hence n! is 8 times a triangular number.
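The argument checks out on the three known Brown number pairs (n, m) = (4, 5), (5, 11), (7, 71); a quick confirmation, with r = (m - 1)/2 as in the proof above:

```python
from math import factorial

def triangular(r):
    return r * (r + 1) // 2

for n, m in [(4, 5), (5, 11), (7, 71)]:
    r = (m - 1) // 2
    # n! + 1 = m^2 and n! = 8 * T_r
    print(n, factorial(n) + 1 == m * m, factorial(n) == 8 * triangular(r))
```

For instance, n = 7 gives m = 71, r = 35, and indeed 5040 = 8 · 630 = 8 · T_35.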

----------


## desiresjab

You took your own way, which is to be expected from different minds. I arrived at this idea from another factorization of the factorial. 

*Observe*: Incidentally (or perhaps not), both 6 and 120 are factorials *and* triangular numbers. Triangular numbers can sometimes be square numbers as well. But those two properties can never occur together, because a factorial beyond 1 can never be a square, which has an easy proof but is intuitively clear to anyone who thinks about it hard enough. I have been around these exact issues for a while now, and I have to an extent humanized them into normal language as far as I have understood them. Some algebra is necessary, as this is not the 12th century.

Now to the heart of it.

8[n(n+1)/2] = 4n(n+1) = 4n^2 + 4n = 2n(2n+2).

*Consider*: Every whole number is a square root. Every odd integral square root lies between two numbers of the form 2n and 2n+2 for some n.

When the two numbers on either side of a number x are multiplied together, the product is always one less than the square of x: (x-1)(x+1) = x^2 - 1.


*The Secret*: It does not have to be proven that such and such a square can or cannot exist. The kernel of the problem is whether a very large factorial can ever be factored precisely into the form 2n(2n+2). If it cannot be, then such a breed of square can never repeat itself beyond the three known examples.

The simple power of this factoring approach may be missed. Our factorial is huge because n itself is very large, so n! is enormous. 

Because they are only two apart on the number line, 2n and 2n + 2 are next-door neighbors in the ordered set of even numbers. It also makes one _highly even_ and the other _barely even_. These two factors that are expected to produce a factorial can share only a single factor of 2 and no odd factor at all, since any common divisor must divide their difference of 2.

*Can two large next door neighbors in the world of evenness ever contain between them precisely and only all the factors of a factorial?* Can two such neighbors exist? If a factorial cannot be factored this way it cannot meet the qualifications of Brocard's problem. 

There is quite a bit more. We cannot determine which of the factors is highly even and which is barely even. I call them super factors and SF for short.

* * * *

The possibilities for the last two digits of the SFs are listed below vertically. The middle number might be paired with either factor in its group. Remember, the SFs are very large numbers themselves and these are only the last two digits of the possible SFs. Observe that the digits of the long factors in each group are identical copies of one another except for their last two digits, the one's and ten's positions. The same is true for the SFs in group B.

A
xpnuh000...02
xpnuh000...00
xpnuh000...98

B
jbvfg...52
jbvfg...50
jbvfg...48

In the case of B all the factors of 5 are with the middle factor, but it has only one factor of 2 to make zeros with, and both its possible mates 52 and 48 are highly even without any 5.

In the case of A all the factors of 5 are with the abundance of 2's in the middle SF, and the traditional tail of zeros is observed. The highly even number is in the middle this time and both its possible mates 98 and 02 are barely even.

We can also observe now that our hypothetical integral square root of (n!+1) which lies exactly between and contiguous to the SFs, must have last digits of 49, 51, 99 or 01. 
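This last-two-digits claim can be confirmed by brute force: for n >= 10, n! ends in at least two zeros, so m^2 = n! + 1 must be 1 mod 100, and only four residues mod 100 square to something ending that way. A Python sketch:

```python
from math import factorial

# for n >= 10, n! ends in at least two zeros
assert all(factorial(n) % 100 == 0 for n in range(10, 30))

# the residues mod 100 whose squares end in ...01
endings = sorted(d for d in range(100) if (d * d) % 100 == 1)
print(endings)  # [1, 49, 51, 99]
```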

It is hard to say which of the numerous collected facts and observations might next help to further understanding. The proof does not have to be about squares at all. It could be about triangles.

----------


## YesNo

That is an interesting way of looking at the problem, considering one factor being highly even and the other barely even. Together those two factors should multiply to n!, which has zeros in the units and tens positions once n is greater than or equal to 10, since a factor of 100 is then in n!. I can see how one of those two numbers should have many zeros in it. I suspect there should be about n/5 zeros at the end of n!.

There are trivial cases where a factorial could be a square, such as 0! and 1!, but as soon as one gets larger than 1 there are primes involved. By Bertrand's postulate there is a prime p with n/2 < p <= n; such a p divides n! exactly once, so it has no partner to pair up with to make a square.

You mentioned the factorials that are also triangular numbers like 120 and 720. I wonder when would there be an r such that for some n, n! = r(r+1)/2. That would be like saying the product of positive integers less than n is equal to the sum of positive integers less than r.

----------


## YesNo

While bicycling in the neighborhood, it occurred to me that you could do something similar to the highly even and barely even with the Brown numbers. As you mentioned, assuming n! = (m - 1)(m + 1) for n > 1, then both m - 1 and m + 1 are even, but since they have a difference of 2, one of them has only one factor of 2 and the other has the rest of the factors of 2 that are in n!. Now take any other prime p > 2 in n!. All powers of that prime will be in either m - 1 or m + 1. That prime cannot be shared across those two factors since they have a difference of 2.

For the three known solutions, 

4! = (4)(6) = (2^2)((2)(3))
5! = (10)(12) = ((2)(5))((2^2)(3))
7! = (70)(72) = ((2)(5)(7))((2^3)(3^2))
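These factorizations, and the constraint that the two factors share only a single factor of 2, can be verified directly; a Python sketch:

```python
from math import factorial, gcd

# The three known solutions written as n! = (m - 1)(m + 1).
# The factors differ by 2, so their gcd is exactly 2.
for n, m in [(4, 5), (5, 11), (7, 71)]:
    assert factorial(n) == (m - 1) * (m + 1)
    assert gcd(m - 1, m + 1) == 2
```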

One way to show that the solutions are finite is to try to see if this additional constraint forces there to be no solutions after a certain point.

----------


## desiresjab

> That is an interesting way of looking at the problem, considering one factor being highly even and the other barely even. Together those two factors should multiply to n!, which has zeros in the units and tens positions once n is greater than or equal to 10, since a factor of 100 is then in n!. I can see how one of those two numbers should have many zeros in it. I suspect there should be about n/5 zeros at the end of n!.


On the right track, but you have to remember to also count the numbers less than or equal to n that can be divided by 5 more than once; for n = 100 these are 25, 50, 75 and 100. There is a formula for this using sigma and the floor function.
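That sigma-and-floor formula is Legendre's: the number of trailing zeros of n! is the sum of floor(n/5^i) over i >= 1. A Python sketch:

```python
from math import factorial

def trailing_zeros(n):
    """Trailing zeros of n!: sum of floor(n / 5^i) for i >= 1."""
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

# compare against the factorial written out in full
s = str(factorial(100))
assert trailing_zeros(100) == len(s) - len(s.rstrip("0"))
print(trailing_zeros(100))  # 24
```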




> There are trivial cases where a factorial could be a square, such as 0! and 1!, but as soon as one gets larger than 1 there are primes involved. By Bertrand's postulate there is a prime p with n/2 < p <= n; such a p divides n! exactly once, so it has no partner to pair up with to make a square.


This is precisely the reason.




> You mentioned the factorials that are also triangular numbers like 120 and 720. I wonder when would there be an r such that for some n, n! = r(r+1)/2. That would be like saying the product of positive integers less than n is equal to the sum of positive integers less than r.


I made a computational mistake regarding 720. It is not a triangular number. In fact, it is conjectured that there are no more triangular factorials after 1, 6 and 120, but this conjecture remains unproven and is in about the same state as Brocard's, generating lots of PhD dissertations in mathematics as brains gnaw away on various corners of it.

Apparently, the question of _are there anymore triangular factorials_ is related to a more general and deeper problem which, if solved, would suddenly lead to the solution of this problem and perhaps Brocard's too. That is, a general solution to this:

n!=a!b!c!...

Some examples are known, and brains are gnawing away.

* * * * *

Triangulars were likely a dead end for Brocard's problem, unless one could show Brocard's factorial has to be triangular to fulfill the problem. Then there would be a connection.

What is still of major interest is the connection between triangulars and factorials in this identity:

(2n)! = 2^n ∏_{k=1}^{n} T_(2k-1).

Major interest. What does it say? I think it says that every even factorial is the product of a power of 2 and the odd-labeled triangular numbers T_1, T_3, ..., T_(2n-1), which is just a jewel.
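The identity can be spot-checked numerically; a short Python sketch:

```python
from math import factorial, prod

def T(k):
    return k * (k + 1) // 2  # k-th triangular number

# (2n)! = 2^n * T_1 * T_3 * ... * T_(2n-1)
for n in range(1, 12):
    odd_triangulars = prod(T(2 * k - 1) for k in range(1, n + 1))
    assert factorial(2 * n) == 2**n * odd_triangulars
```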

(2n)! can be manipulated as follows. 

Let me get back. I have to recheck my notes for errors...

----------


## desiresjab

*(A.)* (2n)! = 2^n ∏_{k=1}^{n} T_(2k-1).

(2n)! = 2(n) · (2n-1) · 2(n-1) · (2n-3) · 2(n-2) · ... · (2n-2n+1)

Note that 2 can be factored out of every other term. Since there are 2n terms there are n terms from which 2 can be factored. We now withdraw n factors of 2 and place them in front of cap pi, just as in the identity *(A.)* above, leaving:

(2n)! = 2^n [n · (2n-1) · (n-1) · (2n-3) · (n-2) · ... · (2n-2n+1)]

Looking now exclusively at the factors above in brackets, all we have to do is rearrange the terms to see what is going on.

[(2n-1) · (2n-3) · ... · (2n-2n+1)] · [(n) · (n-1) · (n-2) · ... · (n-n+1)]

Simply amazing. What we now have is this:

*(2n)! = 2^n [(n!)·(2n-1)!!].*
Caution: Double factorials are not the same as nested factorials. The double factorial means to multiply together all the odd numbers from a certain point down to 1. 

I am not yet seeing in the bold formula above the connection to triangular numbers I was hoping would jump out at me, but the double factorial coming into play is a bit fascinating. The factorial and the double factorial multiplied together must by definition have the same prime factorization as the string of odd-labeled triangular numbers multiplied together, since the two products are the same. This fact needs to be drawn out of the algebra, if I can find the manipulations that break what is inside the brackets down to a visible, recognizable triangular connection, one we know is there, again by definition, from the identity itself.

*Note:* It is always possible my manipulations, though correct, went the wrong way to uncover the triangular connection, leaving us with a curiosity that is merely interesting.
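The derived identity can be checked numerically, and the triangular connection is in fact visible algebraically: T_(2k-1) = (2k-1)(2k)/2 = k(2k-1), so the product of the odd-labeled triangulars splits into n! times (2n-1)!!. A Python sketch:

```python
from math import factorial, prod

def double_factorial(n):
    # n * (n-2) * (n-4) * ... down to 1 or 2
    return prod(range(n, 0, -2))

for n in range(1, 12):
    odd_df = double_factorial(2 * n - 1)
    # the derived identity: (2n)! = 2^n * n! * (2n-1)!!
    assert factorial(2 * n) == 2**n * factorial(n) * odd_df
    # the bracket equals the product of odd-labeled triangulars,
    # because T_(2k-1) = (2k-1)(2k)/2 = k(2k-1)
    assert factorial(n) * odd_df == prod(k * (2 * k - 1) for k in range(1, n + 1))
```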

----------


## YesNo

This is the first I've heard of double factorials, but they make sense: https://en.wikipedia.org/wiki/Double_factorial

----------


## desiresjab

> This is the first I've heard of double factorials, but they make sense: https://en.wikipedia.org/wiki/Double_factorial


I knew about double factorials, but this is the first time I ever ran into one out in the wild. I had been wondering if they were mere toys or relevant to anything. I should have known. In math everything is always relevant on some level because everything is connected. The road that leads ever on is sometimes hard to find.

All these new connections tell me something else--I am nowhere near the last view of Ramanujan on Brocard's problem. Wherever he stopped and gave up, I have not yet arrived, if ever I will. To do so was a stated goal when I started the problem. 

* * * * *

A friend of mine wonders if the world did not in the long run lose out by Ramanujan's journey to England. I had always assumed that he picked up enough mathematical formality from Hardy and associates at Cambridge to untangle his genius from unnecessary quests and his few incorrect notions, making it a great blessing that he undertook his voyage. There was great value to us living now in Ramanujan's being shepherded toward some of the most important mathematical problems of his time and of all time, and brought right up to speed on the frontiers of research by several of the world's preeminent number theorists, who recognized the Indian's awesome abstract power.

The greatest mathematicians always best everyone of their time, doing here and there what was impossible for other great talents. They come up with formulas long thought to be impossible, solve a problem from antiquity or invent new tools. Most of Ramanujan's tools were a personal thing he could not understand himself, so his forte was producing amazing formulas that also looked amazing. When you see them, you know you were not made to go there. You will consider yourself fortunate if years of study garners a half decent understanding of just a few of the identities he pulled from nowhere, directed, he said, by a household goddess.

Ramanujan's formulas are too long and difficult to try to set up on the house word processor. People will have to take a look for themselves. There was a shorter one that was amazing, but I cannot find it.

I guess it is still an open question whether we got lucky or unlucky by Ramanujan's trip to England.

----------


## desiresjab

I have a small improvement to announce. Improvement sounds much better than correction. In the formula 

*(2n)! = 2^n [(n!)·(2n-1)!!_(d, n+(1 or 2))]*,

We had to add that subscript "d" at the end because the double factorial is not a complete one, I realized after something kept nagging me. Going downwards, it begins on (2n-1) and descends to the next odd number after n. So, read upward, it begins on n+1 or n+2, depending on whether n is even or odd. A full written expression of the function would have to account for this with something like the piecewise notation of the Legendre symbol, using the oversized bracket. That is hard on the house word processor. So is defining the range of cap pi, since the superscript and subscript cannot be put up properly at the same time. One or the other, but not both.

On the function above I could have used a "u" subscript for upward. Either way you write it down, the double factorial will start and end in the same place, just the notation encodes it differently.

What can be done with the beast above, I do not know. I cannot see that triangular relationship the way we have it expressed so far.

----------


## YesNo

That goddess, Namagiri, was a form of Lakshmi. https://en.wikipedia.org/wiki/Namagiri_Thayar 

In the good old days, poets would credit muses with what they produced and I think they meant it. Ramanujan still meant it.

----------


## YesNo

Here is something on triangular factorials that gives an argument that n = 1 and n = 3 are the only such cases: http://math.stackexchange.com/questi...angular-number

----------


## desiresjab

In their example _the_ triangle has to equal _the_ factorial with the same index, which is very easy to prove. The actual problem we are looking at, and the more interesting question, is whether _any_ factorial equals _any_ triangle beyond the three known solutions.

It may be just eerie coincidence, but there are only three known solutions of Brocard too. With all the connections between mathematical objects, it makes one wonder. Brocard's three solutions involve different numbers, though, than the three solutions to the factorial/triangular question. Whew!

That gets me to wondering further: are there numerous such unsolved problems in which there are exactly three known easy solutions, the existence of more can neither be proved nor disproved, and computer computations have shown there are no more solutions out to a vast input? In some famous problems does it stop at four solutions? How about five? How are small numbers (or any numbers) distributed over a large number of unsolved problems of this type?

Wouldn't it be nice if all you had to do was plug in the number of known solutions, then plug in the extent to which you had searched for more, and as output you would receive back an answer as to whether any more solutions existed? Someday I believe there might actually be a formula that works on similar principles and can decide which unsolved problems should not be worked on any longer. I know this goes against Gödel. But Gödel himself went against what were considered unassailable notions.

----------


## YesNo

I see what you mean by the factorial/triangular problem. I was assuming the n used in the factorial had to be the n used in the triangular number. But they could be different.

If Brocard's problem and the factorial/triangular problem are related, there should be some underlying explanation for the relationship. The existence of only three known solutions in each case suggests that maybe such a relationship is yet to be discovered. 

There were three features about mathematics that Hilbert wanted to show: (1) completeness, (2) consistency and (3) decidability. Gödel showed that (1) and (2) cannot both be achieved, and Church and Turing showed that (3) was not attainable either. https://en.wikipedia.org/wiki/Entscheidungsproblem However, the idea of giving a computer examples of input and correct output, asking it to create a model based on that training data, and then using the model to predict the correct output for arbitrary input underlies "machine learning" or "artificial intelligence".

----------


## desiresjab

1, 3, 5... 

Add this string up and you get a square. You always get a square no matter how long or short you make the consecutive string of odd numbers.

Also, all squares are the sum of two consecutive triangular numbers.

Also, cube each consecutive integer and add them up. Then add each consecutive integer again and square the result. In other words:

1^3 + 2^3 + 3^3 + ... = (1 + 2 + 3 + ...)^2

Also, the difference between the squares of two consecutive triangular numbers is a cube.

Also, since every square number is the sum of consecutive odd integers, so is the square of a triangular number.

This last one could be very important to us, since we have the sum of some odd integers in our derived equation.
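All of the identities listed above can be spot-checked numerically; a Python sketch:

```python
def T(k):
    return k * (k + 1) // 2  # k-th triangular number

for N in range(1, 30):
    # the sum of the first N odd numbers is a square
    assert sum(2 * k - 1 for k in range(1, N + 1)) == N * N
    # every square is the sum of two consecutive triangulars
    assert T(N - 1) + T(N) == N * N
    # sum of consecutive cubes equals the square of the sum
    assert sum(k**3 for k in range(1, N + 1)) == sum(range(1, N + 1)) ** 2
    # difference of squares of consecutive triangulars is a cube
    assert T(N) ** 2 - T(N - 1) ** 2 == N**3
```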

----------


## YesNo

> 1, 3, 5... 
> 
> Add this string up and you get a square. You always get a square no matter how long or short you make the consecutive string of odd numbers.


This one makes geometric sense. Start with 1 dot; to get the next square add a dot to the left and the bottom plus one in the corner. In general, if one has a square of side n dots, then one needs n dots on the left side, n dots on the bottom, and one dot in the corner to get the next larger square. That is 2n + 1 extra dots added to the n^2 dots already there. The previous square would have needed n - 1 dots on the side and the bottom and one in the corner, or 2n - 1 dots. So the sequence of squares is the sequence of partial sums of the odd integers.




> Also, all squares are the sum of two consecutive triangular numbers.


For this one use the closed form of the n-th triangular number, n(n + 1)/2. Then algebraically add the closed form for the next one, (n + 1)(n + 2)/2. The sum is (n + 1)(2n + 2)/2 = (n + 1)^2, a square.




> Also, cube each consecutive integer and add them up. Then add each consecutive integer again and square the result. In other words:
> 
> 1^3 + 2^3 + 3^3 + ... = (1 + 2 + 3 + ...)^2


I don't see an explanation for this one. I didn't look for counterexamples.




> Also, the difference between the squares of two consecutive triangular numbers is a cube.


I don't see an explanation for this one either, but I did check it for n < 16.




> Also, since every square number is the sum of consecutive odd integers, so is the square of a triangular number.
> 
> This last one could be very important to us, since we have the sum of some odd integers in our derived equation.


This last one doesn't seem to be correct. So, I will look for a counterexample. Try 4 = 1 + 3. That one works. Try 9 = 3 + 5? That doesn't work, so 9 = 3^2 is a counterexample.

Edit: I see it now. It is not the sum of two consecutive odd integers but the sum of the odd integers starting with 1 up to some point.

----------


## desiresjab

> This last one doesn't seem to be correct. So, I will look for a counterexample. Try 4 = 1 + 3. That one works. Try 9 = 3 + 5? That doesn't work, so 9 = 3^2 is a counterexample.
> 
> Edit: I see it now. It is not the sum of two consecutive odd integers but the sum of the odd integers starting with 1 up to some point.


Sorry, I did not word that one very well. 

The part that you add to get the next and the next and the next figurate number, the Greeks called the _gnomon_. It is actually a useful word.

With an unending proliferation of relationships, it is no wonder one runs into sudden connections when dealing with figurates.

Every other triangular number is a hexagonal number.

Every pentagonal number is 1/3 of a triangular number.

All even perfect numbers are triangular, namely T_(2^p - 1) where 2^p - 1 is a Mersenne prime.

666 is the largest repdigit triangular number (Bellew and Weger, 1975).

The latter would have been a great problem to work on, if it were not already solved. I envy the people who got to solve it. That is my kind of problem.

The sum of all the triangular numbers up to the _n_th triangular number is the _n_th tetrahedral number.

The sixth heptagonal number minus the sixth hexagonal number is the fifth triangular number.

It seems triangular numbers are generators for any figurate number where gnomons are applicable: if you know the right triangular numbers, you can calculate all the other figurates.
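These figurate relationships can be checked with the standard closed forms (hexagonal k(2k-1), pentagonal k(3k-1)/2, tetrahedral k(k+1)(k+2)/6); a Python sketch:

```python
def T(k): return k * (k + 1) // 2               # k-th triangular number
def hexagonal(k): return k * (2 * k - 1)        # k-th hexagonal number
def pentagonal(k): return k * (3 * k - 1) // 2  # k-th pentagonal number

for k in range(1, 50):
    # every other triangular number is hexagonal: T_(2k-1) = H_k
    assert T(2 * k - 1) == hexagonal(k)
    # every pentagonal number is one third of a triangular number
    assert 3 * pentagonal(k) == T(3 * k - 1)
    # the sum of the first k triangulars is the k-th tetrahedral number
    assert sum(T(j) for j in range(1, k + 1)) == k * (k + 1) * (k + 2) // 6

# even perfect numbers are triangular with a Mersenne-prime index
for p in [2, 3, 5, 7]:
    mersenne = 2**p - 1  # 3, 7, 31, 127 (all prime)
    assert 2 ** (p - 1) * mersenne == T(mersenne)
```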

----------


## desiresjab

Well, I guess I am not banned.

----------


## desiresjab

And finally:

1 = 1^3, 3+5 = 2^3, 7+9+11 = 3^3, 13+15+17+19 = 4^3, ...

This about takes the cake, or is the frosting on the cake.
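A numerical check of this cube pattern; writing the first odd number of the k-th block as k^2 - k + 1 is my own bookkeeping, not from the pattern as listed:

```python
# The k-th block of k consecutive odd numbers sums to k^3.
# The block for k starts at the odd number k^2 - k + 1
# (e.g. k = 3: 7 + 9 + 11 = 27 = 3^3).
for k in range(1, 20):
    start = k * k - k + 1
    block = [start + 2 * j for j in range(k)]
    assert sum(block) == k**3
```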

The additive properties of numbers and their multiplicative properties being friendly but not related by family is part of what keeps numbers so mysterious. There is still a lot of work left to do on the additive properties. Unfortunately, none of it will be accessible to civilians the way the properties of triangles and squares are. I doubt if elliptic equations will become common to people. That is about as likely as eighth graders of the future comfortably reading Finnegans Wake.

Most of the properties of figurate numbers seem to be additive. There is the exception that the product of sums of squares is also a sum of squares: (a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2.

In the factorial problem one of the factors of (2n)! is a product of triangular numbers. It just hit me that I have set that problem up wrong. We do not have an upward double factorial. That is just the labeling! What we have is an upward sequence of triangular numbers multiplied together. Excuse me again.

----------


## desiresjab

> And finally:
> 
> 1 = 1^3, 3+5 = 2^3, 7+9+11 = 3^3, 13+15+17+19 = 4^3, ...
> 
> This about takes the cake, or is the frosting on the cake.
> 
> The additive properties of numbers and their multiplicative properties being friendly but not related by family is part of what keeps numbers so mysterious. There is still a lot of work left to do on the additive properties. Unfortunately, none of it will be accessible to civilians the way the properties of triangles and squares are. I doubt if elliptic equations will become common to people. That is about as likely as eighth graders of the future comfortably reading Finnegans Wake.
> 
> Most of the properties of figurate numbers seem to be additive. There is the exception that the product of sums of squares is also a sum of squares.
> ...


I have not made a mistake at all, except to think I made one in the first place. Double excuse me, and the double factorial is still on!! Yes, I derived it that way. I am happy again, but too tired to think math tonight, perhaps..

----------


## desiresjab

> I have not made a mistake at all, except to think I made one in the first place. Double excuse me, and the double factorial is still on!! Yes, I derived it that way. I am happy again, but too tired to think math tonight, perhaps..


What can I say? I am making mistakes left and right. Mistakes can eventually lead to the truth. Careless algebra on a word processor not meant for it, instead of doing the algebra on paper first, is the main reason for the mistakes.

Calculation instead of algebra has shown that within the brackets of the formula

*(2n)! = 2^n [(n!)·(2n-1)!!]* 

we have a full double factorial instead of a partial one. I do not know if this will make a difference, but it is certainly neater in notation and more pure of form. I do not like mistakes, but I like this result. This time it is beyond dispute, for indeed

*10! = 2^5 (5!·9!!)*. 

Awfully neat and suggestive, but I still have no suggestions.

----------


## desiresjab

*10! = 2^5 (5!·9!!)* and

*10! = 2^5 (T_1·T_3·T_5·T_7·T_9)*

Really cool.

----------


## YesNo

> I doubt if elliptic equations will become common to people. That is about as likely as eighth graders of the future comfortably reading Finnegans Wake.


Not only eighth graders. When it comes to something like Finnegans Wake I ask myself would I rather spend my time trying to understand that or trying to understand quantum physics or gravitational theory or elliptic curves or Brocard's problem or Sierpinski's problem? If one wants an impossible task one might as well choose an interesting one. It might turn out not to be so impossible after all.

----------


## YesNo

> *10! = 2^5 (5!·9!!)* and
> 
> *10! = 2^5 (T_1·T_3·T_5·T_7·T_9)*
> 
> Really cool.


I checked it in a Google sheet. It worked.

----------


## desiresjab

I keep asking myself, "How did they know how many factors of 2 to draw out front of the cap pi?" We can reverse engineer the formula pretty easily for 10 simply by taking a factor of 2 from each even number 10 or below. Then we notice two factors of 2 remain between 5 and 10, and two factors are missing below 5. Easy replacement. But how did they know to do it? They would have no reason to do that unless they had a clue from somewhere else.

They derived their formula from something. "Our" formula came from somewhere too. I want to get from ours to theirs.

* * * * *

Trying to solve unsolved problems, one lives for such moments-- side trips through wonderland. One understands one will not solve the problem. In trying anyway one runs into questions asked by one's self which open up new vistas and allow learning to go on in the approximate area of the problem when progress has stalled. It happens every time. Now I have this new question to answer, and I will not stop until it is answered. Where is the connection between triangles, factorials and double factorials? Even when I know, I may not be any closer to solving Brocard, but I will have better connections among numbers.

----------


## desiresjab

I learned a little more. Apparently, double factorials were not even around as a function until roughly the mid-20th century. Bringing those two out front of cap pi must have been someone's idea when defining the function. The function is pretty natural, if you look at 

*n!! = 2^k k!* (with k = n/2) for even numbers, and

*n!! = n!/(2^k k!) = n!/(n-1)!!* (with k = (n-1)/2)

for the odds.

Very reminiscent of what I derived from (2n)! These are attractive to me. I think they look good. 

Even and odd double factorials have to be defined separately. Odd ones can weirdly be extended to negative values.

----------


## YesNo

It doesn't seem obvious to me either why someone would factor out those 2s. However, the exponent makes sense. There will be at least n/2 factors of 2 in n!.

However, if one is multiplying triangular numbers together to see what relationships pop up, someone might have seen that the difference between them and the n! is a curious factor of 2 to the n/2 power and guessed the relationship. The Greeks would have been thinking along geometric lines rather than algebraic ones.

----------


## YesNo

> *n!! = 2^k k!* for even numbers


This formula makes intuitive sense. The n!! is n(n-2)(n-4)...2. Factor a 2 out of each of these n/2 factors and you get 2^(n/2) (n/2)!. Let k = n/2.

----------


## desiresjab

> This formula makes intuitive sense. The n!! is n(n-2)(n-4)...2. Factor a 2 out of each of these n/2 factors and you get 2^(n/2) (n/2)!. Let k = n/2.


It does make intuitive sense. It is the way we ended up doing it, by simply removing a factor of 2 from each even number. But we knew what we were after. We were simply trying to manipulate (2n)! to see where we would arrive, i.e. whether we could arrive back at the given formula involving the triangular numbers. Of course we didn't; we ended up at our factorial times double factorial thing, which obviously _is_ that product of triangular numbers in disguise.

Here is the difficulty right now. For odd numbers, when I remove n factors of 2 from the numbers above and below n, I need a proof that the number of such factors remaining above n is exactly the amount needed to replace those taken from below n. This part is not intuitively clear to me and I believe a proof must have been provided at some point.

The key to making it intuitively clear may lie simply in studying the function on even numbers, which I have not done yet. I need a little more time with all these formulas. I am glad there is something to sort out--we cannot grow unless there is. Do I believe this digression will be helpful with Brocard? Probably not, but it is increasing our understanding of numbers, and that _will_.

----------


## desiresjab

I have to be away for a few days again. I see the way to settle the question, perhaps. If the total number of 2's in (2n)! minus the total number of 2's in n! were proven algebraically to equal n, I suppose that might do it. Go ahead and prove that while I am gone, if you have a mind to.

----------


## desiresjab

Here is what I know for sure. The higher powers of 2 between n and 2n have to equal the lowest powers of 2 between 1 and n. In other words, for n = 5, after removing that first layer of 2's there is still 2^2 left where 8 was, and 2^2 is exactly what was skimmed off between 1 and n. I am asking how they know this to be true in general. I am probably missing something quite basic. Maybe I will find it before I leave for a few days off.

----------


## desiresjab

Specifically, it needs to be shown that


Σ_(i>=2) ( ⌊2n/2^i⌋ - ⌊n/2^i⌋ ) = ⌊n/2⌋

where the brackets ⌊ ⌋ indicate the floor function, which always rounds down to the lowest whole value. This may prove to be a lousy way to set the problem up to find the answer. I just know it is correct. The left side counts the 2's contributed between n and 2n without their lowest and most numerous layer; the right side is only the lowest layer of 2's for the numbers up to n. The two should be equal.

It also says that the lowest layer of 2's in (2n)! is 2^n, doesn't it? In other words there are n such factors, which is what we are trying to show. It is a nifty equality. One has to internalize that.

Well, one should prove it first, even though it is known to be true. I cannot see it intuitively, but I believe it is seeable that way.
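The equality being sought can be checked numerically. It also follows from Legendre's formula, since ⌊2n/2^i⌋ = ⌊n/2^(i-1)⌋, so the count of 2's in (2n)! is n plus the count for n!. A Python sketch:

```python
def v2_factorial(n):
    """Exponent of 2 in n! by Legendre's formula: sum of floor(n / 2^i)."""
    total, power = 0, 2
    while power <= n:
        total += n // power
        power *= 2
    return total

# Peeling one factor of 2 off each of the n even numbers up to 2n
# accounts for exactly n factors; what remains is the count for n!.
for n in range(1, 200):
    assert v2_factorial(2 * n) == n + v2_factorial(n)
```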

----------


## desiresjab

I guess I do not see that intuitively--yet, at least. It is what I said was intuitively clear a few posts ago. But I do not see that the remaining number of 2's between n and 2n after the first layer is peeled off should always be equal to what was peeled off between 1 and n. But maybe that fact is just a consequence of the law. No, it is the law in slightly different expression.

----------


## desiresjab

A consequence of this would be that whenever n is a power of 2, n! will have n - 1 factors of 2. For a power of 2, the count always comes out one less than the number itself: writing n = 2^k, the count is 2^k - 1. So 4! is guaranteed to have three factors of 2, 8! is guaranteed to have 7, etc. People who have worked in binary know this.

----------


## YesNo

For even n, n!! skips every other factor one would normally see in n!. That is, it looks like this product: 2*4*6*...*n. There are n/2 factors. Now remove a 2 from each of those factors. You get 2^(n/2) (1*2*3*...*(n/2)) = 2^(n/2) (n/2)!. Let k = n/2 to simplify the notation and you get 2^k k!.

For odd n, n!! skips just like for the even n. It looks like this product: 1*3*5*...*n. This is the same thing as multiplying all the numbers less than or equal to n together and then dividing out the even ones: (1*2*3*...*n)/(2*4*6*...*(n-1)). But that is n!/(n-1)!!. Note that n-1 is even since n is odd. We already have a way to write an even double factorial, and so we get the following for n odd: n!! = n!/(n-1)!! = n!/(2^((n-1)/2) ((n-1)/2)!). Let k = (n-1)/2 and this simplifies to n!/(2^k k!).
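Both closed forms can be verified directly; a Python sketch:

```python
from math import factorial, prod

def double_factorial(n):
    # n * (n-2) * (n-4) * ... down to 1 or 2
    return prod(range(n, 0, -2))

# even n: n!! = 2^k k!, with k = n/2
for n in range(2, 30, 2):
    k = n // 2
    assert double_factorial(n) == 2**k * factorial(k)

# odd n: n!! = n! / (2^k k!), with k = (n-1)/2
for n in range(1, 31, 2):
    k = (n - 1) // 2
    assert double_factorial(n) == factorial(n) // (2**k * factorial(k))
```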

----------


## YesNo

I was thinking about those double factorials. They are a way to split a factorial into the even factors and the odd factors in this manner:

n! = n!! (n-1)!!

For the even n, we can factor out the n/2 powers of 2 and continue the process. For example, if n is even then n!! = 2^(n/2) (n/2)!. Now take the (n/2)! and split it into even and odd factors. Repeat the process until all the factors of 2 are removed from n!, leaving 2 to some power times the odd factors.

I suppose one could also do this for other primes but this notation is specific to 2. What one gets is the prime factorization of n!.
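The split n! = n!! (n-1)!! is easy to verify, and repeating the even/odd split does strip out exactly the 2-part of n!; a Python sketch (checking the 2-part for n = 10 is my own choice of example):

```python
from math import factorial, prod

def double_factorial(n):
    return prod(range(n, 0, -2))

# the even/odd split of a factorial
for n in range(2, 30):
    assert factorial(n) == double_factorial(n) * double_factorial(n - 1)

# repeating the split must terminate at the odd part of n!; check n = 10
value, twos = factorial(10), 0
while value % 2 == 0:
    value //= 2
    twos += 1
print(twos, value)  # 8 14175, i.e. 10! = 2^8 * 14175
```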

----------


## desiresjab

On my trip it became crystal clear where the oversight lay that was causing my disagreement with the formula. Math requires lots of solitude and good marijuana. I had overlooked the detail in my reckoning that the lower half of the double factorial was being filled in with odd small numbers. Here is the formula again:

(2n)! = 2^n [n! (2n-1)!!].

Only _n_ factors of 2 are needed to produce the full double factorial with its complete lower end because that is exactly how many factors of 2 are contained in the upper interval. I have not seen it proved anywhere that the upper interval will always contain exactly _n_ factors of 2, but having now examined the mechanics of the details, I fully accept that it does and has been proven elsewhere, probably by induction. It is one of those facts of life of numbers I did not know. There are plenty of them I did not know before but came to understand. I like this formula and I am glad we stopped to consider it. Whether it can help in the quest for Brocard awaits more solitude and marijuana. I believe there is a connection and I believe I see a hint of it through the smoke, sir. It is really a matter of connecting notations with the problem.

Maybe more vision will enable me to see why the part of the formula in brackets (the factorial times the double factorial) is equivalent to multiplying the oddly labeled triangular numbers together. I look forward to seeing that, and I suppose it is what I must stick with for the moment.

----------


## YesNo

I was trying to make sense out of the abc conjecture. I think I have a basic understanding of what it is trying to say primarily from this wikipedia article: https://en.wikipedia.org/wiki/Abc_conjecture

What I don't see at the moment is why it implies that Brocard's problem has only finitely many solutions, but this reference is supposed to provide the key: http://www.mat.univ.szczecin.pl/file...dramanujan.pdf

----------


## desiresjab

> I was trying to make sense out of the abc conjecture. I think I have a basic understanding of what it is trying to say primarily from this wikipedia article: https://en.wikipedia.org/wiki/Abc_conjecture
> 
> What I don't see at the moment is why it implies that Brocard's problem has only finitely many solutions, but this reference is supposed to provide the key: http://www.mat.univ.szczecin.pl/file...dramanujan.pdf


I do not see it clearly yet, either. One first needs to study Diophantine equations to get the basics of their solution and become familiarized with the usual methods in the field.

I am still trying to figure out the triangular connection to the other problem, but I am looking at this problem too. I would like to understand the abc conjecture better, so I suppose I will. You never know when the insight will come, except that it will be when you are concentrating your best.

----------


## desiresjab

As an interesting extension of the (2n)! problem, I have realized it is possible to determine the exact number of factors of 2 in the interval (1, 2n) without recourse to the floor function, which is the usual manner. I have derived by observation a piecemeal formula that works for all values of x≥1, and I would have no need of the floor function, then, to determine the number of 2's in virtually any factorial. If I do not lay it out in a list it will be harder to conceptualize. Near the end of the list I realize I need only include even values in the range (1, 2x) to get a formula. 

(2·4)! = 8·7·6·5 ║ 4·3·2·1
factors of 2: (3, 1) ║ (2, 1)


(2·5)! = 10·9·8·7·6 ║ 5·4·3·2·1
factors of 2: (1, 3, 1) ║ (2, 1)


(2·6)! = 12·11·10·9·8·7 ║ 6·5·4·3·2·1
factors of 2: (2, 1, 3) ║ (1, 2, 1)


(2·7)! = 14·13·12·11·10·9·8 ║ 7·6·5·4·3·2·1
factors of 2: (1, 2, 1, 3) ║ (1, 2, 1)


(2·8)! = 16·15·14·13·12·11·10·9 ║ 8·7·6·5·4·3·2·1
factors of 2: (4, 1, 2, 1) ║ (3, 1, 2, 1)

(2·9)!, even values only: 18·16·14·12·10 ║ 8·6·4·2·1
factors of 2: (1, 4, 1, 2, 1) ║ (3, 1, 2, 1)


(2·10)!, even values only: 20·18·16·14·12 ║ 10·8·6·4·2·1
factors of 2: (2, 1, 4, 1, 2) ║ (1, 3, 1, 2, 1)


Certain patterns now appear.

1. For even x, the two partitions have the same number of elements.

2. For odd x, the upper partition gets a bonus factor of 2 among its elements.

3. Every even number in the interval (1, x) that is ≥ x/2 has a double in the upper partition with exactly one more factor of 2.

4. Where x is a power of 2, the lower partition has one less factor of 2 than the upper, so the count in this special case is easy. The general method below, however, still applies.

5. To determine y in the expression below, merely count how many even elements in (1, x) are ≥ x/2, adding one to y for each case. If x is odd, we add one additional bonus factor, and we are done.

2x+y+k, where y is the number of even elements in the interval (1, x) that are ≥ x/2, and k=0 when x is even, k=1 when x is odd.

----------


## desiresjab

We already knew we needed no floor function to determine the number of factors of 5 in a factorial (for we simply count the number of 0's in the tail), but now we have a way to avoid the floor function to determine the total number of factors of 2, as well. The fact that it is simpler and works for *all* factorials is impressive. It is probably an easy consequence of things I already knew, but I never put this together until now.
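
That tail-counting trick works because n! always contains more factors of 2 than of 5, so each trailing 0 pairs one 5 with one 2. A sketch of the check (the helper names are mine):

```python
from math import factorial

def trailing_zeros(m):
    """Count the trailing 0's in the decimal form of m."""
    s = str(m)
    return len(s) - len(s.rstrip('0'))

def count_factor(n, p):
    """Count the factors of the prime p in n! by direct division."""
    total = 0
    for m in range(p, n + 1, p):
        while m % p == 0:
            total += 1
            m //= p
    return total

for n in range(1, 40):
    assert trailing_zeros(factorial(n)) == count_factor(n, 5)
```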

----------


## YesNo

> As an interesting extension of the (2n)! problem, I have realized it is possible to determine the exact number of factors of 2 in the interval (1, 2n) without recourse to the floor function, which is the usual manner. I have derived by observation a piecemeal formula that works for all values of x≥1, and I would have no need of the floor function, then, to determine the number of 2's in virtually any factorial. If I do not lay it out in a list it will be harder to conceptualize. Near the end of the list I realize I need only include even values in the range (1, 2x) to get a formula. 
> 
> (2·4)! = 8·7·6·5 ║ 4·3·2·1
> factors of 2: (3, 1) ║ (2, 1)
> 
> 
> (2·5)! = 10·9·8·7·6 ║ 5·4·3·2·1
> factors of 2: (1, 3, 1) ║ (2, 1)
> 
> ...


I checked this with Python. My program may be wrong. 

I don't know if it generalizes as you are suggesting or not.

Here is the code:

from math import factorial
from sympy.ntheory import factorint

for i in range(1, 11):
    b = factorint(factorial(2*i))
    print("2 times", i, "factorial has", b[2], "factors of 2.")

Here are the results:

2 times 1 factorial has 1 factors of 2.
2 times 2 factorial has 3 factors of 2.
2 times 3 factorial has 4 factors of 2.
2 times 4 factorial has 7 factors of 2.
2 times 5 factorial has 8 factors of 2.
2 times 6 factorial has 10 factors of 2.
2 times 7 factorial has 11 factors of 2.
2 times 8 factorial has 15 factors of 2.
2 times 9 factorial has 16 factors of 2.
2 times 10 factorial has 18 factors of 2.

Here are some further values:

2 times 11 factorial has 19 factors of 2.
2 times 12 factorial has 22 factors of 2.
2 times 13 factorial has 23 factors of 2.
2 times 14 factorial has 25 factors of 2.
2 times 15 factorial has 26 factors of 2.
2 times 16 factorial has 31 factors of 2.
2 times 17 factorial has 32 factors of 2.
2 times 18 factorial has 34 factors of 2.
2 times 19 factorial has 35 factors of 2.
2 times 20 factorial has 38 factors of 2.

----------


## desiresjab

Looking again at the bracketed part of the formula

(2n)! = 2^n·[n!·(2n-1)!!]

for the equivalence between that and triangular numbers, let us make a set of the oddly labeled triangular numbers

(1, 1+2+3, 1+2+3+4+5, 1+2+3+4+5+6+7,...)=

(1, 6, 15, 28,...)

Each new element adds the next two successive integers, which means we are adding an odd number to the total already there, so the parity of the set strictly alternates. We only need to show that the above set multiplied together has the identical prime factorization as the part of the formula in brackets. 

The 1, 2 and 3 of the lower factorial are got from the 1 and the 6. The 4 comes from the 28, leaving a 7. This 7, along with the 5 and the 3 in the prime factorization of 15, completes the double factorial starting from (2x-1); the 1 at the end is superfluous as a multiplier and does not change the product.

We have shown it, now we need to prove it for every case.

We note that the above ordered set shares alternating parity as a feature with the factors of a factorial. We note further that factors of a prime n are introduced on the nth element of the set of oddly labeled triangular numbers. We note that any numbers remaining after these factors are factored out of the triangular numbers are odd, and, along with the factors still remaining in the triangular set, form a double factorial beginning on 2x-1.

That is as close as I can get to a proof right now.
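
One way to close the gap, offered as a sketch: the triangular number with odd label 2k-1 has the closed form T(2k-1) = (2k-1)(2k)/2 = k(2k-1), so the product of the first n oddly labeled triangular numbers is (1·2·...·n)·(1·3·...·(2n-1)) = n!·(2n-1)!!, which is exactly the bracketed part. A numerical check:

```python
from math import factorial, prod

for n in range(1, 12):
    # Product of the oddly labeled triangular numbers T(1), T(3), ..., T(2n-1),
    # using T(2k-1) = k*(2k-1)
    tri_product = prod(k * (2 * k - 1) for k in range(1, n + 1))
    odd_double_fact = prod(range(1, 2 * n, 2))  # (2n-1)!!
    assert tri_product == factorial(n) * odd_double_fact
```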

----------


## YesNo

Here is a paper discussing the relationship between factorials and triangular numbers. I have only skimmed the introductory part, but it looks like it is discussing a similar problem to the one you are addressing: http://www.integers-ejcnt.org/l50/l50.pdf

He called the triangular numbers the "additive analogs of factorials," which makes sense, but I did not think of it that way before.

----------


## desiresjab

> I checked this with Python. My program may be wrong. 
> 
> I don't know if it generalizes as you are suggesting or not.
> 
> Here is the code:
> 
> from math import factorial
> from sympy.ntheory import factorint
> 
> ...


What I meant to say was that the second interval has an extra factor over the first interval for every even element from 1 to x that is ≥ x/2, plus one more if x is odd. Subtract that amount from the second interval, and you have the amount in the first interval. Then you add the first and second intervals together for the total. The algorithm has to work, but I am usually messy or plain incorrect with such notation out of the blocks. The verbal correction is right. I will get to the notation later.

----------


## desiresjab

> Here is a paper discussing the relationship between factorials and triangular numbers. I have only skimmed the introductory part, but it looks like it is discussing a similar problem to the one you are addressing: http://www.integers-ejcnt.org/l50/l50.pdf
> 
> He called the triangular numbers the "additive analogs of factorials," which makes sense, but I did not think of it that way before.


That looks like a good paper. I don't think I have read it before.

But right now let us peek at the first part of the abc conjecture in your link. I was able to figure out the first part for A as a non-square so far.

As long as the prime p≤n, p will divide n!

This means n! ≡ 0 (mod p). So in the expression

n! + A ≡ m^2 (mod p), if n! ≡ 0, then A ≡ m^2. That is why n! + A ≡ m^2 (mod p), as he says, reduces to A ≡ m^2 (mod p).

Bounded from above by a constant means bounded from above by a horizontal line, so that only a finite number of solutions in whole numbers could lie between that line and the input values themselves.

I hope that helps if there was any confusion on that part. I will try to get to the far longer and more difficult looking part later. I have to leave for another few days soon.

----------


## YesNo

> This means n! ≡ 0 (mod p). So in the expression
> 
> n! + A ≡ m^2 (mod p), if n! ≡ 0, then A ≡ m^2. That is why n! + A ≡ m^2 (mod p), as he says, reduces to A ≡ m^2 (mod p).


That would be one way to approach searching for more solutions. The term n! + A would have to be a quadratic residue modulo primes larger than that value for it to be equal to a square.

----------


## desiresjab

My interpretation of that first proof was that it eliminated non-quadratic A's from consideration by showing they contradicted the earlier assumption. But perhaps they _assumed_ there was a solution to show there were finitely many more, at best. Some of the attendant conjectures to the problem will have to be understood in order to understand abc. For instance, I see the algebra perfectly, but I do not see at all yet how their proof for the non-quadratic variety of A bounded it from the top by a constant. Seems to me the constant would be zero, because there would be no solutions of that variety, is how I take it.

This means we are riders in the same boat when it comes to grasping the connection between Brocard's problem and the finite solution set implied by the abc. What I showed was the extent of my reading of that article so far. I will do more, but I have to find the time. It appears that to get a good grasp of the abc I will need to pick up a lot of additional information and skills. Fine. That is how I do it anyway. Problems within the problems I select always force me to the books to learn more. I understand there are problems completely beyond my brain power to even get a good purchase on. I wonder if this is one of those, or if we can get close to the understanding the big boys and girls have?

----------


## desiresjab

As I mentioned, there are many ancillary propositions and ideas to understand before one can have a solid amateur's grasp of the abc conjecture. Szpiro's equation looks like a horror to grasp. I don't even know what e represents in it. Are they talking about the famous e of calculus, or have they assigned something else to e? Every single element of the proof must be grasped to have the understanding we want, but this relies on many other propositions that support this one. And of course the radical is not the radical from high school algebra.

This problem is a bit different from the style of number theory I have worked on. It brings in many basics, like the GCD. They have to be at your command. To be perfectly honest, I had forgotten that any integer not of the form 4t+2 can be expressed as the difference of two squares. The article reminded me of that. I believe it is necessary to have everything one knows about squares, GCD's, LCM's, divisor functions, Euler's phi function, the Euclidean algorithm, the extended Euclidean algorithm _et al_, at one's fingertips during such an investigation. Otherwise, one can spend a great deal of time and frustration on aspects of a problem that would be obvious if one had only remembered some basic function or proposition from earlier in one's studies. I like to avoid painful algebra when I can. It cannot always be avoided. I am delving downward on this one. Inch by inch, progress will show--I hope, at least.

In the meantime, I think I will go over and peek at your link on factorials and triangular numbers. That subject is almost like recreation now, compared to the foreign difficulty of the abc. I have a reasonable command of triangles and factorials as cohorts now. I expect to understand the article with much greater ease than I would have before our own investigations, if I could have at all, that is. I am hoping to see a proof out of him that I can follow.

----------


## YesNo

The link on factorials and triangular numbers gives a proof of what you were trying to show for even factorials. Basically, think of a triangular number written in closed form. That is, the sum of the first n numbers can be represented by n(n+1)/2. Since we skip the even triangular numbers and get rid of the 2 in the denominator with the 2^(n/2) factor, this looks like it should prove the result.

I am beginning to understand the abc conjecture. Take relatively prime integers with a + b = c. Define rad(abc) = rad(a)·rad(b)·rad(c) to be the product of the distinct primes in the product abc. This product is typically larger than c, but sometimes it is not. Sometimes rad(abc) < c, and there are infinitely many triples (a, b, c) such that rad(abc) < c. However, there is (so I hear) no known example where (rad(abc))^2 < c. That is, if we raise rad(abc) to some power larger than 1, no matter how small that "larger than one" is, then we get only finitely many triples that would work; that is, (rad(abc))^(1+epsilon) < c has only finitely many triples (a, b, c) for which that relationship holds.

That is the abc conjecture. For this to apply to Brocard's problem, we would need to establish that for all n, n! + 1 = m^2 is such that (rad(n!)·rad(1)·rad(m^2))^(1+epsilon) is always less than m^2 for some epsilon. Of course, I might be misunderstanding all of this.

----------


## desiresjab

> The link on factorials and triangular numbers gives a proof of what you were trying to show for even factorials. Basically, think of a triangular number written in closed form. That is, the sum of the first n numbers can be represented by n(n+1)/2. Since we skip the even triangular numbers and get rid of the 2 in the denominator with the 2^(n/2) factor, this looks like it should prove the result.
> 
> I am beginning to understand the abc conjecture. Take relatively prime integers a + b = c. Define rad(abc) = rad(a)·rad(b)·rad(c) to be a product of the distinct primes in the product abc. This product is typically larger than c, but sometimes it is not. Sometimes rad(abc) < c and there are infinitely many triples (a,b,c) such that rad(abc) < c. However, there is (so I hear) no known example where (rad(abc))^2 < c. That is, if we raise rad(abc) to some power larger than 1, no matter how small that "larger than one" is, then we get only finitely many triples that would work; that is, (rad(abc))^(1+epsilon) < c has only finitely many triples (a, b, c) for which that relationship holds.
> 
> That is the abc conjecture. For this to apply to Brocard's problem, we would need to establish that for all n, n! + 1 = m^2 is such that (rad(n!)·rad(1)·rad(m^2))^(1+epsilon) is always less than m^2 for some epsilon. Of course, I might be misunderstanding all of this.


Continued misunderstandings to overcome are what I depend on to inch along. Which reminds me to retract all my glorious declarations of a new way to find the 2's in the interval (1, n). It does not work. I am very curious about that, and for a while may be delayed there. Is it even a solved problem? That is, is there an explicit, dependable formula relating the number of 2's in each partition to the other, with perhaps an exception or two for some particular classes of number? That is what I was trying to do. Working late into the night after traveling, I did not even see the exceptions popping up in my list. I plead fatigue blindness.

I have to decide what interests me most right now. That is the other way I inch along--by taking on only what interests me. The abc is new and different and a huge bite, but I do not mind; I was only avoiding it because previous cursory inspections of it had left a definite impression of a problem which itself was very difficult to understand. What was the problem, and what was it trying to achieve? That is where we are spending our time right now--trying clearly to figure out what the problem is, and after that to figure out precisely how these bounding ideas which play so large a part in their discussion suggest there are at most finitely many solutions.

I have to wonder if they are using calculus to squeeze out these bounds but giving an overview strictly in terms of algebra. I did not see an integral sign or a differential in the entire discussion. Usually I do not see them, but I know they are being used in much of the work. For reasons not clear to me, top researchers often use the complex number system combined with calculus, simply called complex analysis, to research many of the major problems in number theory. When you see explanations of the Riemann hypothesis you seldom see anything to indicate the complex number system, but in fact the Riemann hypothesis is a statement in the complex number system.

----------


## YesNo

I haven't finished reading the various papers on the multitude of topics we have discussed. Usually I need to read them multiple times over a period of days before some dream clarifies what is going on even if I do finish one of them. Most I haven't even read once.

I think the Riemann hypothesis uses complex numbers because the sum of the reciprocals of the integers raised to a power s converges when s > 1, and that sum defines the zeta function, which can then be extended over the complex numbers. The only advantage of the Riemann hypothesis (that I see at the moment) is getting a bound on the estimate of the number of primes in a range. That provides another way to disprove the hypothesis: find a range with more primes than the hypothesis says should be there.

The abc conjecture probably does not involve calculus to my knowledge at the moment. However, the epsilon is taken from calculus as a standard symbol for an arbitrarily small positive value. If epsilon is 1 in the abc conjecture, then the exponent for rad(abc) is 1 + epsilon = 1 + 1 = 2. The abc conjecture would say that given epsilon = 1 there are only finitely many triples of integers, a + b = c, such that (rad(abc))^2 < c. At the moment that finite number is unknown, but it could well be 0, because no examples are known. If I remember right, the triple requiring the largest epsilon would have the exponent about 1.63 or so. That is, epsilon would be about 0.63. 

This leads to a computational problem similar to finding larger and larger primes: find the triple with the largest epsilon. It must exist, since there are only finitely many triples (a, b, c) that satisfy the inequality (rad(abc))^(1+epsilon) < c. In order to measure which is the top triple, the idea of "quality" is defined. One could just say the quality is c/rad(abc), which directly compares the two values of interest, but that leads to rather large numbers, and so logs are put around the numerator and denominator, and the quality becomes q(a,b,c) = log(c)/log(rad(abc)). At least that is what I think motivates that definition.

The hardest part of the computation is that one needs to be able to factor a, b and c to find the distinct primes so one can compute the rad of those numbers which is the product of the distinct primes in those co-prime integers. Computations are limited by the numbers we can factor in a reasonable amount of time.
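
A small sketch of those definitions in Python (`rad` and `quality` are my own helper names; the example triple 2 + 3^10·109 = 23^5 is the well-known record holder, with quality about 1.63):

```python
from math import gcd, log

def rad(n):
    """Product of the distinct primes dividing n, by trial division (rad(1) = 1)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def quality(a, b, c):
    """q(a,b,c) = log(c) / log(rad(abc)) for coprime a + b = c."""
    assert a + b == c and gcd(a, b) == 1
    return log(c) / log(rad(a * b * c))

q = quality(2, 3**10 * 109, 23**5)  # roughly 1.63
```

Trial division is the limiting step, which matches the point above: the search is bounded by what we can factor quickly.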

----------


## desiresjab

I have not finished everything either. I loaded too much on the plate and now I feel myself getting tired. Absolutely it takes multiple times reading through these articles to tease out understanding. I get a scrap here and a scrap there. What I realize because I know myself is that I will keep after various aspects of these problems like a fanatic, pushing myself, because they have challenged me now. These problems taunt me, saying I am not even intelligent enough to understand them correctly, and I accept the challenge on tentative sea legs.

A person feels there must be an upper bound on their own intelligence, too. Yet one has had a lifetime to reason out that people are bad judges of their limits and should dismiss all limits concerning their own potential from active duty for a better life. 

So, where shall this mathematical butterfly settle for a while, then, if I insist? The factorial/double factorial problem is still haunting me--not for its triangular connection anymore, but for the search for those 2's in the interval (1, n).

----------


## YesNo

While looking at how to search for additional examples of Brocard's problem, I found reference to Montgomery modular multiplication that looked interesting: http://www.hackersdelight.org/Montgo...iplication.pdf 

Even if the abc conjecture is true, and I assume it is, the next step would be to find all the finitely many solutions to n! + 1 = m^2. One way to get that result is to find an upper bound on the possible solutions and then do a comprehensive search.

----------


## desiresjab

> While looking at how to search for additional examples of Brocard's problem, I found reference to Montgomery modular multiplication that looked interesting: http://www.hackersdelight.org/Montgo...iplication.pdf 
> 
> Even if the abc conjecture is true, and I assume it is, the next step would be to find all the finitely many solutions to n! + 1 = m^2. One way to get that result is to find an upper bound on the possible solutions and then do a comprehensive search.


400,000,000,000!

If there are any more solutions, they are beyond that number, I believe I read. 

In the meantime, I have made progress. I am still working. My DeuceHound Model-2, nicknamed DH (mod 2), is nearly ready for mass production. I hope to have it on the shelves by mid Christmas season.

I must rule the ruler function.

----------


## YesNo

Yes, I think various people have checked it that far. But one could argue that that's not very far. There are only 12 digits in that number. Of course, putting a factorial on that number makes it rather large.

So, what does this DeuceHound Model-2 do?

----------


## desiresjab

> Yes, I think various people have checked it that far. But one could argue that that's not very far. There are only 12 digits in that number. Of course, putting a factorial on that number makes it rather large.
> 
> So, what does this DeuceHound Model-2 do?


Four hundred billion factorial might be a competitor for one of the largest numbers we ever had practical use of any kind for. 

The DH (mod 2), a deluxe line out of the DeuceMaster series, delivers the most comprehensive factorial unwinder on the market today right to your virtual doorstep.

----------


## desiresjab

In the relationship between the number of factors of 2 in the interval (n+1, 2n) versus the interval (1, n), there is a chain of beautiful islands of stability within the integers that grow increasingly farther one from the next as increasing powers of 2 recede from our vision into our imagination. These orderly places are centered, in fact, around pure powers of 2.

*(A) Conjecture*: Whenever N in N! is a power of 2, i.e. N = 2k in the expression (2k)!, the difference between the number of factors of 2 in the intervals (2k, k+1) and (k, 1) is one.

*(B) Conjecture*: The difference in the number of factors of 2 between the same two intervals, when the number being factorialized is of the form 2^k+2, is always two.

*(C) Conjecture*: When any number of the form 2^k-2 is factorialized, the difference between the number of factors in the upper and lower intervals is k-1. The difference in the number of factors between the middle factorial and the lower factorial is k.

*Proof of (A)*: Observe the ruler function sequence. It represents the number of factors of 2 of each successive even number.

1 2 1 3 1 2 1 4 1 2 1 3 1 2 1 5 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1 6...
 
The value of any number with an address of the forms 4n+1, 4n+2 or 4n+3 is fully known simply from the address. The only change occurs in numbers with an address of the form 4n. If we were considering all numbers instead of just the set of even ones, we would call them 8n numbers instead of 4n, and so forth, but we need only consider the evens, hence the reduced addressing.

But any address with the form of 4+8n, has a value of 3. Further, any address of the form 8+16n, has a value of 4; any address in the form of 16+32n, has a value of 5, and so forth for each successive positive integer. Only the first appearance of a number in the sequence represents a pure power of 2. The first appearance of 5 in the sequence represents the number 32 itself. All later appearances of 5 represent 32 wrapped up within a larger composite with factors other than 2. If we arrange the above sequence in columns, it will highlight the major properties.

1 2 1 3
1 2 1 4
1 2 1 3
1 2 1 5
1 2 1 3
1 2 1 4
1 2 1 3
1 2 1 6...

In fact, this will be the method of our proof. We can slide any length of symmetric grouping from later in the sequence over the top of an earlier one, and see for ourselves what is going on. From one pure power to the next, the entries are identical, except for the last.

1 2 1 4
1 2 1 3, the difference is one. Even if we break the entries 1 2 in half and slide the later portion over as in a subtraction, we get,

1 2

2
1, the difference is one.

It always works:

1 2 1 3 1 2 1 5
1 2 1 3 1 2 1 4, the difference is one. This proves directly that the 

difference is 1, in the number of factors of 2 between a number k and its double both factorialized, when k is a power of 2, and hence in the intervals (2k, k+1) and (k, 1).

This is all to state the obvious fact that if you multiply any number k times 2, the new number j has exactly one more factor of 2 than k. That would work as a proof, too.
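
The ruler sequence itself is easy to generate, which makes the address claims checkable (a sketch; `v2` is my name for the count of factors of 2):

```python
def v2(n):
    """Number of factors of 2 in n (the 2-adic valuation)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# The ruler sequence: factors of 2 in the successive even numbers 2, 4, 6, ...
ruler = [v2(2 * m) for m in range(1, 33)]
assert ruler[:8] == [1, 2, 1, 3, 1, 2, 1, 4]

# Addresses of the form 4+8n hold 3; 8+16n hold 4; the first 5 sits at 16 (1-indexed)
assert all(ruler[4 + 8 * n - 1] == 3 for n in range(4))
assert all(ruler[8 + 16 * n - 1] == 4 for n in range(2))
assert ruler[16 - 1] == 5
```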

----------


## desiresjab

*Proof of (B)*: In the shift from (2·16)! to (2·17)!, the bottom intervals stay the same, but one more factor of 2 is introduced with the doubling of 17, which was not a member of the set up to 16. Therefore the difference between the number of factors of 2 in the intervals (2k, k+1) and (k, 1) is two when the factorialized number is of the form 2(2^n+1). See below.


(2·16)!, even values only: 32·30·28·26·24·22·20·18 ║ 16·14·12·10·8·6·4·2·1
factors of 2: (5, 1, 2, 1, 3, 1, 2, 1) ║ (4, 1, 2, 1, 3, 1, 2, 1)

(2·17)!, even values only: 34·32·30·28·26·24·22·20·18 ║ 16·14·12·10·8·6·4·2·1
factors of 2: (1, 5, 1, 2, 1, 3, 1, 2, 1) ║ (4, 1, 2, 1, 3, 1, 2, 1)


The proof is complete. Conjecture (B) is now *Theorem (B)*.

----------


## desiresjab

*Proof of (C)*:

See the operations below. Further observe that (2·8)! is of the form (2·2^k)! because it equals (2·2^3)!, with k equaling 3 in this case. Therefore (2·7)! is of the form (2·(2^k-1))!, and we have postulated there should be a difference of k between its upper and lower intervals in their factors of 2.

Note that when the center bar moves backward one position to the right in going from 16 to 14 factorial, the lower interval loses k factors of 2 from a total we already know. The upper interval, by law, will lose only one factor of 2 to its upper interval counterpart in the move backward, also from a total we know. The factors of 2 in the lower interval, then, would be 2^k-1-k; the number of factors of 2 in the upper interval would be 2^k-1. Subtracting one from the other we get:

(2^k-1)-(2^k-1-k) = k



(2·7)! = 14·13·12·11·10·9·8 ║ 7·6·5·4·3·2·1
factors of 2: (1, 2, 1, 3) ║ (1, 2, 1)


(2·8)! = 16·15·14·13·12·11·10·9 ║ 8·7·6·5·4·3·2·1
factors of 2: (4, 1, 2, 1) ║ (3, 1, 2, 1)


*The proof is complete*. Conjecture (C) is now *Theorem (C).*
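
The three statements can also be checked mechanically, using the standard count of 2's in a factorial (a sketch; the helper names are mine):

```python
def v2_factorial(n):
    """Factors of 2 in n!: the sum n//2 + n//4 + n//8 + ..."""
    total, power = 0, 2
    while power <= n:
        total += n // power
        power *= 2
    return total

def interval_difference(m):
    """For even m: factors of 2 in (m/2, m] minus factors of 2 in [1, m/2]."""
    lower = v2_factorial(m // 2)
    upper = v2_factorial(m) - lower
    return upper - lower

for k in range(3, 12):
    assert interval_difference(2**k) == 1          # Theorem (A)
    assert interval_difference(2**k + 2) == 2      # Theorem (B)
    assert interval_difference(2**k - 2) == k - 1  # Theorem (C)
```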

----------


## YesNo

> 1 2 1 3
> 1 2 1 4
> 1 2 1 3
> 1 2 1 5
> 1 2 1 3
> 1 2 1 4
> 1 2 1 3
> 1 2 1 6...


I think I can see the pattern you are trying to explain with these examples. This might result in a recursive algorithm that is simpler than whatever is used at the moment to find the number of factors of 2 in n!. But I don't know what people use at the moment.

----------


## desiresjab

Now we are able to look at any 

*(2·2^k)!* or

*(2·2^k±2)!*

and report by inspection alone the power of 2 in the prime factorization of any of the three, as well as the number of 2's in each of their respective upper and lower intervals.

This is a big improvement over using the repeated floor function. Of course it is only for islands of predictable stability that grow ever more sparse out the number line. The factorial islands of stability for 2 are:

(3, 4, 5)!, 
(7, 8, 9)!, 
(15, 16, 17)!, 
(31, 32, 33)!, 
(63, 64, 65)!
(127, 128, 129)!...etc., etc., etc.

*The Master Factorial Unwinder (model ∞)*, after its launch, will show that these islands exist for any pure prime power and _a_ neighbor, not necessarily next door. Its larger task is naming *all* the powers of prime factors in the prime factorization of a factorial by inspection, as the larger task of the *DeuceMaster (mod 2)* is to perform this for factors of 2.

----------


## YesNo

What values do you expect for various intervals? I'll test it with Python.

----------


## desiresjab

The Ruler Sequence is a pattern I have studied a fair bit. The key to any value in the sequence is simply its address. In the case of 4n+1, 4n+2 and 4n+3 numbers, we only need the street without the street number, so to speak. Determining the fourth column is trickier, but not really. 

3's will be found only at addresses that can be put, at lowest, in the form of 4+8n. (In other words, not some 2+4n.)

4's will be found only at addresses of the form 8+16n.

5's will be found only at addresses of the form 16+32n, 

and so forth.

I used to believe with you that the sequence could only be mastered in an algorithm. I hope to show that this is false, and to give a general formula for finding the power of all primes in any factorial. An analog of the Ruler Function for each prime will be of great use, and can perhaps be put into one explicit formula. That formula will be none other than the

*Master Factorial Unwinder (model ∞).*

* * * * *

By the way, here is how they calculate the factors of a prime in a factorial, shown for the prime 2 and the number 69. I calculate how many times each power of 2 up to 69 goes into it (by integer division), and add them all up.

64 goes in once, 32 goes in twice, 16 goes in 4 times, 8 goes in 8 times, 4 goes in 17 times, and 2 goes in 34 times. Add them up.

1+2+4+8+17+34=66. 

F2(69)=66

This is the procedure they use now, I mean to say. I hope for a great improvement. If not, finding the island chain of stability for 2's, proving its existence and then conjecturing these islands exist for any prime in a factorial, has been satisfying in itself.
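
That procedure is what is known as Legendre's formula: the count of a prime p in n! is the sum of the integer quotients n//p + n//p^2 + n//p^3 + ... A minimal sketch (the function name is mine):

```python
def factors_of_p_in_factorial(n, p):
    """Legendre's formula: sum of n // p^i over all i >= 1 with p^i <= n."""
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

# The worked example above: 34 + 17 + 8 + 4 + 2 + 1 = 66
assert factors_of_p_in_factorial(69, 2) == 66
```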

----------


## desiresjab

Ask yourself: "Why should neighbors farther removed than next door from a pure power of 2 show any less stability and predictability than a next-door neighbor does?" The answer is that they shouldn't; it is just the next step in a pattern not yet fully identified but wholly contingent upon the Ruler Sequence.

----------


## desiresjab

> What values do you expect for various intervals? I'll test it with Python.


I don't understand your question.

----------


## YesNo

> I don't understand your question.


I am going to try to implement your algorithm to make sure it is correct. I don't completely understand it at the moment, though, so some examples would be helpful. I will compare what I get from your algorithm with what I would get from constructing the intervals of the factorial and then factoring them.

----------


## desiresjab

Give me a while to think. When I rush is when I am excited and make all these mistakes. I may have to go back and edit some posts to amend the conjectures. If the conjectures are wrong, I will actually go back to those posts and make them right with an edit.

I will be able to extend the neighborhood, which is good news.

----------


## desiresjab

I have made some mistakes, but I have straightened them out mentally. It is still valid and even better. I overlooked like a simpleton again that my values were for _even_ numbers factorialized on either side of a pure power of 2. Bad but good. This allows me to get the values of the next-door neighbors. I thought I was calculating for the next-door neighbors before, but I wasn't; I was calculating for the next-to-next-door neighbors instead. This makes the in-betweens easy, which is the good news.

I will be back once I get every detail straight, not before!

----------


## desiresjab

Okay, I amended mistakes in previous posts (of which there were very few, it turns out) and am ready to present the beautiful chain of receding, stable islands, which is two factorials wider than we realized before.

Let's use some simple examples. Because 8 is a power of 2, we know 8!=(2·2^2)!; with the exponent equaling 2 in this case, we know that it will have 2^2 powers of 2 in its upper interval, and 2^2-1 powers of 2 in its lower half, for a total of 2^3-1 factors of 2.

(2·3)!=6·5·4·║3·2·1
...........1....2......1

(2·4)!=(2^3)!=8·7·6·5║·4·3·2·1
.....................3....1.......2....1

(2·5)!=10·9·8·7·6·║5·4·3·2·1
............1.....3....1......2....1

* * * * *

Let (2·2^p)=(2^k)

Not only can we read directly the values for 10! and 6! by knowing the values for 8! (8 being a power of 2), but we can easily read between the lines now and get the values for 7! and 9!. The difference between the number of factors of 2 in the middle even factorial and the lower even factorial (such as 8! and 6!) is always equal to k, where k comes from the pure power of 2, so that difference will not change, either, in going from 6! to 7!.

7! will look identical to the schematic above for 6! in terms of its evenness values, we can see. The factorial between 6! and 8! has the same disparity between its upper and lower halves as 6! does, and the same number of factors of 2 overall, since multiplying 6! by 7 introduces no new factors of 2. Therefore 7! will have the same difference between its upper and lower halves, k-1 in the expression (2·2^p)=(2^k), as 6! has. The difference between the factor counts of the middle even factorial and the lower even factorial is also equal to k.

By my formula, 10!, otherwise known as (2·5)!, should have a difference of two between its top and bottom intervals, and contains one more factor of 2 overall than 8! does. The calculations above, which are really only summations of increasing lengths of the Ruler Sequence starting from its beginning, allow us to fill in visually where 9! would be and verify that its upper and lower intervals are identical to those of 8! for powers of evenness. The difference between the number of factors of 2 in the two successive factorials 9! and 8! is 0.

This proof is correct. Below, let me try to put the results briefly and clearly.

* * * * *

*(2^k-2)!* has k fewer factors of 2 than (2^k)!, and *Ui-Li=k-1*.

*(2^k-1)!* has k fewer factors of 2 than (2^k)!, and *Ui-Li=k-1*.

*(2^k)!* has 2^k-1 total factors of 2, and *Ui-Li=1*.

*(2^k+1)!* has 2^k-1 total factors of 2, and *Ui-Li=1*.

*(2^k+2)!* has 2^k total factors of 2, and *Ui-Li=2*.

The Demonstration is complete.
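The five rules above can be spot-checked against the ordinary floor-function count. A sketch, with `f2` standing in for the traditional method (the helper names are mine):

```python
# Check the neighbor rules around (2^k)! against the standard
# floor-function count of factors of 2 in n!.
def f2(n):
    """Factors of 2 in n!: floor(n/2) + floor(n/4) + ..."""
    total, p = 0, 2
    while p <= n:
        total += n // p
        p *= 2
    return total

checks = []
for k in range(3, 12):
    n = 2 ** k
    checks.append(f2(n) == n - 1)          # (2^k)! has 2^k - 1 factors of 2
    checks.append(f2(n - 2) == n - 1 - k)  # (2^k - 2)! has k fewer
    checks.append(f2(n - 1) == n - 1 - k)  # (2^k - 1)! has k fewer
    checks.append(f2(n + 1) == n - 1)      # (2^k + 1)! matches (2^k)!
    checks.append(f2(n + 2) == n)          # (2^k + 2)! has one more
    # Ui - Li: factors of 2 in the upper interval minus the lower interval
    checks.append(f2(n) - 2 * f2(n // 2) == 1)
    checks.append(f2(n - 2) - 2 * f2((n - 2) // 2) == k - 1)
```

All seven checks pass for every k tried, which is what the demonstration predicts.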

----------


## desiresjab

You have to admit, using my method as opposed to the traditional one would make calculating these values for large factorials a snap, as long as the factorial fell within our five-wide swath of islands. Even for a factorial as small as 16!, the traditional method would take considerably more effort than mine. Try it. Even if you knew the shortcut secret of the pure power's 2-count, calculating all those values for the surrounding four factorials without my tricks would require a near-Herculean effort. Remember, the number below is a tiny one.

Around every power of 2 factorialized, such a cluster of predictability exists. 

16!=20922789888000.

This one simple function of the *DeuceMaster (mod 2)* enables you to calculate the two different 2-counts (Overall and Ui) for any factorial in the swath from a few basic rules, while others wrestle with serpentine iterations of the *Floor Function*.

----------


## desiresjab

What if the formulas and procedures are similar or identical for the factorial swath surrounding the factorialized powers of any prime, such as 3? This is what I am foreseeing. I still have to finish the DeuceMaster (mod 2), but I am eager to look for the island pattern in higher primes and see how it codes out.

Without precise algebra, it will never work out easily for higher primes. The factorial powers grow so large so fast that there is no time to look for a stable pattern--rather, no way to verify the results with a calculator of 32-bit decimal precision.

----------


## desiresjab

If we know any number, or any number factorialized, we can figure out its address and then its value. If the number is a power of 2 factorialized, we have theorems which enable us to be quite complete with all calculations concerning factors of 2 for this power and its four neighboring factorials--the two on either side of it.

Give me a power of 2 factorialized, and I can give you back its address, the addresses of its neighbors and all relevant values concerning the factor 2 in the positions of these factorials in the Ruler Sequence. That is childishly easy. The factorial itself will have an address of 4n. The factorial just under it will be a 4n+3 number and will have a positional value (number of factors of 2) of 1 in the Ruler Sequence. The factorial just under that will be a 4n-2 number with a positional value of 2. Just above the power of 2 factorialized, there will be a 4n+1 number factorialized, with a positional value of 1. Just above that will be a 4n+2 number factorialized, with a positional value of 2.

Give me 10536209805943621, and I can tell you that its double factorialized, (2x)!, has x factors of 2 in its upper interval, and how many in total. We may not know any tricks yet to get all the 2-count information from (2x)! and its neighbors, but we can do this much at least, with any number. We see, then, how it might be useful to obtain these values merely from the magnitude of the address and the form it can be put in, for any number whatsoever.

----------


## YesNo

Montgomery modular multiplication uses the idea that factors of two in a binary computer can be handled easily through bit shifts. It is a way to speed things up.

If you can do this with 2 you might be able to do something similar for any prime.

I am trying to think of some way to use Python to explore this more.

----------


## desiresjab

I just about have the entire formula. Until then, I can predict with minor calculations all the 2-values for a swath twelve factorials wide surrounding the pure power of 2 factorialized, though surrounding it asymmetrically so far: 4 values on the lower side and 7 values on the upper side of the pure power, which itself makes 12. I may be able to extend this farther, to at least 16. If I can find one formula that covers any number, I say that is more compact, since the formulas I have seen are more like algorithms, like the Fibonacci sequence--easy to write down the instructions for the next number, hard to write down an equation.

However, proceeding in the same vein as before, I now easily have a swath 29 factorials wide, man! Fourteen on either side of the pure power. This is just what falls out naturally, and I have a thinking it may not be the limit of what falls out easily.

I could write out rules for these 28 neighbors of the pure power (now in symmetrical neighborhoods again) as simply and easily as I did previously for two neighbors on each side. But really, what I would do is plug them into a computer program that did the work, which I have another thinking would require less computer labor than the Floor Function method. On the summation algebra for the Ruler Sequence, though, one is certainly reminded of the Floor Function, so I am hoping mine is not that function still wearing a mathematical disguise--and if it is, that it at least represents another method of finding the F2 for the numbers surrounding pure powers of 2 factorialized.

This is probably nothing new, I just had to figure it out for myself rather than look it up. Come to think of it, the Floor Function is really succinct. Gauss invented the function, so you know it is down to its most terse expression. Perhaps what I have in mind is at least different. I should be able to see that far ahead, but I can't quite. I need to figure a bit more for the whole thing.

Currently, I have extended the swath to a width of 93 consecutive factorials I can calculate F2 for, centered around the position of the power of 2 factorialized in the Ruler Function, if the factorial is large enough, like 64! is. Once you look at enough of the Ruler Function, you see the symmetry radiate outward from the virgin power of 2.

That is, the first group surrounding the group of four the virgin power is in, ends in 3, the next group farther out on either side, ends in 4, the next group ends in 3 again, the next group ends in 5, therefore the next group must end in 3 again, since every other group does, etc., etc.

I am tired. I will write this all in later. I am close to my own formula, which may or may not be my own. Likely not. I think the symmetry on top extends all the way up to the k-2 power, but I am not sure of this yet.

----------


## YesNo

I was thinking of calculating the factorization of n! by storing the factorization of (n-1)! and then adding the factors of n to that. It would take a lot of storage. Or store the individual factors of the numbers up to n and then sum the exponents for each prime less than n. Of course, people usually want algorithms that are fast and start from scratch without looking anything up.
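The incremental idea above can be sketched directly. This is deliberately the slow, lots-of-storage version described, not an optimized algorithm, and the function names are my own:

```python
from collections import Counter

def factorize(m):
    """Prime factorization of m as a Counter {prime: exponent}."""
    factors = Counter()
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors[d] += 1
            m //= d
        d += 1
    if m > 1:
        factors[m] += 1
    return factors

def factorial_factorization(n):
    """Factorization of n!, built incrementally: the factorization of n!
    is the factorization of (n-1)! plus the factors of n."""
    total = Counter()
    for m in range(2, n + 1):
        total += factorize(m)
    return total
```

For 10! this gives 2^8 · 3^4 · 5^2 · 7, matching the floor-function counts.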

----------


## desiresjab

I am looking for an explicit function to factor factorials. That is why I want something besides the Floor Function.

1 2 3 4 5 6 7 8
...2....4....6....8
...1....2....1....3
...2...2^2......2^3...............2^4...............................2^5

Above are four different metrics I am using.

*(2^0·k)+ 2^0(k-1)+ 2^1(k-2)+ 2^2(k-3)+..+2^(k-2)(k-k+1)= F2{N!}

(1·6)+(1·5)+(2·4)+(4·3)+(8·2)+(16·1)= F2{64!}=F2{(2^6)!}=

6+5+8+12+16+16=63*.
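That summation can be checked mechanically; `weighted_sum` is my own name for the left-hand side of the formula above:

```python
# The weighted sum (2^0 * k) + 2^0(k-1) + 2^1(k-2) + ... + 2^(k-2) * 1
# should equal F2{(2^k)!} = 2^k - 1.
def weighted_sum(k):
    total = k  # the leading (2^0 * k) term
    for i in range(k - 1):
        total += (2 ** i) * (k - 1 - i)
    return total
```

For k = 6 the terms are 6, 5, 8, 12, 16, 16, summing to 63 exactly as above, and the identity holds for every k tried.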

----------


## desiresjab

What I am really doing is searching for an expression of factorials which might be more helpful than using a mere N! in Brocard's problem. This could allow me to see why adding one to a large factorial cannot (or can) produce a square.

----------


## desiresjab

The next task may be an impossible one--to do the same for the prime 3 as I have done for 2 above, and from there on to any prime. At the end I propose to add one and see how it looks. The specialness of 2 as the only even prime may mean it is the only prime such a feat is possible with. Who would have thunk it--God even hid deep structures within something as seemingly innocent and innocuous as the set of even integers?

Well, now we need to know how odd primes are packed into a factorial. We cannot use their oddness to help. Or can we? We know all these primes will be 4n+1 or 4n+3. Another expression I might use is 4n-1 for 4n+3. Same thing, since we are speaking (mod 4).

I thought of doing 3's out of a similar form (3n)!, but that does not help me in totally factoring (2n)! I need to extract the 3's from (2n)! Though indeed I might be able to get something from (3n)!, it would only do me any good when it came to factoring something like (6n)!, which is a composite of both. 

No need to defer the painful algebra, it has to come some time. Here goes not a swan dive but a belly flop, sir.

----------


## YesNo

For 3, you could split the numbers into two sets, those that 3 divides and those that it doesn't. Those numbers that 3 does not divide can be ignored. Those that 3 does divide, remove one factor of 3 from all of them. Take that set reduced by one factor of 3 and do the same to it: split it into two sets, those that 3 divides and those that it doesn't. Continue until there are no more numbers left to consider.
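That set-splitting procedure can be written down almost verbatim; this sketch generalizes it to any prime p (the function name is mine):

```python
# Repeatedly split the numbers 1..n into those p divides and those it
# doesn't, strip one factor of p from each divisible number, and repeat.
def prime_count_in_factorial(n, p):
    """Exponent of the prime p in n!, by the set-splitting method."""
    numbers = list(range(1, n + 1))
    count = 0
    while True:
        divisible = [m // p for m in numbers if m % p == 0]
        if not divisible:
            return count
        count += len(divisible)  # one factor of p removed from each
        numbers = divisible
```

This agrees with the floor-function count: for example, 10! contains 3^4, and 69! contains 2^66 as computed earlier in the thread.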

----------


## desiresjab

Can't do the algebra until you see the patterns. Here is a Ruler Sequence for 3's.

1 1 2 1 1 2 1 1 3 1 1 2 1 1 2 1 1 3 1 1 2 1 1 2 1 1 4
3 6 9.....................27..............................54..............................81


The house word processor makes it difficult for me to put more than one space between letters, or I could highlight the action better by putting in all the numbers without the danged periods. If anyone knows how to do that, I would appreciate the tip.

Anyway, it occurred to me that every odd prime would be modeled after this in its own metric, so to speak. The sequence for 5 would be:

11112..11112..11112..11112..11113...

For 7 it would be:

1111112..1111112..1111112..1111112..1111112..1111112..1111113

Det it? Dot it. Dood.

I have a thinking I can model this pattern (which works for any odd number, not just primes) into one formula, simultaneously including mastership over the only even prime, 2. One formula to break any factorial of all its primes. One Ring to rule them all! One Ring to bind them!
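Generating these analog sequences for any odd base takes only a few lines; a sketch (the helper names are mine):

```python
def vp(n, p):
    """Number of factors of p in n."""
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

def ruler_sequence(p, terms):
    """The analog Ruler Sequence for p: factors of p in p, 2p, 3p, ..."""
    return [vp(p * m, p) for m in range(1, terms + 1)]
```

For p = 3 this reproduces 1 1 2 1 1 2 1 1 3 ..., for p = 5 the 1 1 1 1 2 pattern, and so on for 7.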

----------


## YesNo

A ring that binds them is a good way to look at getting a formula that works.

----------


## desiresjab

Without further ado, I can unveil the DeuceHound (mod 2). The formulas I showed earlier are all compacted within the DeuceHound, and could be reproduced for further research if needed. Yes, all those earlier formulas are elegantly folded--(ahem!)--into the DeuceHound formula.

Call a _Measure_ four units in the Ruler Sequence. That means eight units in natural numbers. The symmetry of the Ruler Sequence on both sides of a virgin power will be very important to us. We know the number of factors of 2 in (2^k)! is 2^k-1. We could figure them out by our earlier formula, but why do that when we know the shortcut? We only wanted to demonstrate that the earlier formula worked. Suppose we are given:

(2^6+31)!

We know the numbers on either side of a virgin power in the Ruler Sequence are perfectly symmetrical, as far as they can be. Moving from the highest power in one measure to the highest in the next moves 4 units in the Ruler Sequence, or one _Measure_, but that is eight units in natural numbers. Remember, some units of the natural numbers are invisible--not represented in the Ruler Sequence because they are not divisible by 2, but they are there invisibly--so three measures move us up 24 units in natural numbers. We know those will be 1 2 1 3..1 2 1 4..1 2 1 3..1 2 1 and represent only the even numbers, with the invisible odd numbers in between. Moving up one visible unit in the sequence moves us up two units in natural numbers. We add three full measures, then the first three quarters of another one. The number we seek, then, is 95, an invisible number. So, we simply need to add

(2^6-1)+7+8+7+4=63+26=89. F2{95!}=89

to get our answer, because we know F2{95!}=F2{94!}.

In this special case of F2{(2^k+Q)!}, Q=F2{(2^k-Q)!}, because Q=(2^(k-1)-1).

Now the general method. If Q is odd, subtract 1 from it, since it will have the same F2 as its lower neighbor.

*(2^k-1)+F2{(Q-Q(mod 8))!}+F2{(Q(mod 8))!}*.

63+F2{24!}+F2{6!}

63+22+4=89

The last term, if not zero, will now be an even number less than 8. This remainder tells us how much of a partial measure we have to add at the end, if any. We can use the middle term because, after all, the Ruler Sequence merely repeats itself from the beginning when you stop on a virgin power, up to any Q after that virgin power. The first term of the Hound is just F2{(2^k)!}=(2^k-1), of course.

Yes, friends, the Ruler Sequence not only repeats itself backwards from 2^k, it also repeats itself forward from 1 with exactly the same numbers and forward from 2^k with exactly the same numbers. Is that amazing, or what?

Well, that is it, the *DeuceHound (mod 2)*. It may look complicated, but I can explain any detail, because it is simpler than it looks. I may have made a mistake somewhere which you can catch.

----------


## YesNo

I suppose we could take the 31 and write it as 2^4+15 and then continue doing that for 15.

How did you get the 7+8+7+4? I assume that was by looking up the results.

It does seem like a simplification. Rather than looking at numbers larger than 2^k, we can look at the numbers less than that. I don't know how this fits in with current factorization algorithms. Have you looked at any papers on this topic?

----------


## desiresjab

Now for the surprise I did not see myself until now. The DeuceHound (mod 2) simplifies further in an amazing way to this:

*F2{(2^k)!}+F2{Q!}*.

Wow! How's That for understandable?

In case it is a new expression, you witness it here with a time and date stamp.

----------


## desiresjab

I am glad I discovered the formula in the last post naturally and last. That way I learned all the steps that go into it--how to expand it backwards into more detailed forms.

----------


## desiresjab

> I suppose we could take the 31 and write it as 2^4+15 and then continue doing that for 15.
> 
> How did you get the 7+8+7+4? I assume that was by looking up the results.


Not at all. I simply plugged in real numbers for k and Q in the general formula, before the formula evolved even further. Then it evolved beyond that to what I present in the next post after the one you are responding to. That is really simple and easy to calculate. You don't even see all the past formulas that are folded into it, except for the General Form Of DeuceHound you are responding to. That reduction is visible. That is what made me see it.




> It does seem like a simplification. Rather than looking at numbers larger than 2^k, we can look at the numbers less than that. I don't know how this fits in with current factorization algorithms. Have you looked at any papers on this topic?


Exactly. And now it is even easier to use. It all comes down to the palindromic nature of the Ruler Sequence. It has symmetries going in so many directions that they confused me for a while. Then I realized that what the sequence has done up to any perfect power of 2, it will repeat exactly on the way to the next virgin power, where on the last number it clicks over to the new virgin power instead of repeating the last virgin.

----------


## desiresjab

I have not looked at any or many factorization algorithms. I am too busy. I would like to look at everything, and if I were, as I expect some of my descendants will be, equipped with a computer implanted in me and integrated with my consciousness, I could. I could not only read it all, I could digest and collate it all.

Math papers are hard to understand. Often those algorithms are for computers and half in computer language I do not understand. Soon I will be outfitted with a new computer with better capabilities. At that time I may look seriously into learning an OOP language good for math. My programming experience is ancient and on a procedure-oriented platform. I would have to learn object-oriented. I don't know how hard that is for someone with my procedural background.

----------


## YesNo

Try Python. You can get what you need to get started by downloading the Anaconda distribution. You will also be able to use Jupyter notebooks, which support MathJax. That lets you format the mathematics more easily and is, I suspect, the standard today for publication purposes. This MathJax is the same as what is used on math.stackexchange.

----------


## desiresjab

Have you taken the DeuceHound (mod 2) for a test ride?

----------


## desiresjab

I believe I can do the same thing for 3 that I did for 2. Getting all primes together under one roof for the DeuceHound (mod ∞) will be a monster job. I do not know if it is possible. I suspect if it were, it would already have been done.

Keep your eye peeled for unsolved number theory problems for my collection where only a finite number of solutions are known. For Fermat's Last Theorem there were two known solution exponents, 1 and 2. For Brocard's problem there are three known solutions: (4!, √25), (5!, √121), (7!, √5041). There must be quite a few unsolved problems with this setup, where a few solutions are known and it is not known if there are any more.
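The three Brocard solutions are easy to confirm by direct search; a sketch using integer square roots (the function name is mine, and the scan limit is arbitrary):

```python
import math

# Brocard's problem: for which n is n! + 1 a perfect square?
# Only n = 4, 5, 7 are known.
def brocard_solutions(limit):
    found = []
    factorial = 1
    for n in range(1, limit + 1):
        factorial *= n
        root = math.isqrt(factorial + 1)
        if root * root == factorial + 1:
            found.append((n, root))
    return found
```

A scan up to n = 40 turns up exactly (4, 5), (5, 11), (7, 71): 4!+1 = 25, 5!+1 = 121, 7!+1 = 5041.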

----------


## desiresjab

Then there is the Beal conjecture, where infinitely many solutions are known, but all of one variety--where A, B and C share a common factor. He offers $1,000,000 for a proof that only this type of solution is possible. He is a billionaire banker to the big cat oil men, worth, they say, about $8 billion, who just happens to be a quality mathematician.

I don't think he would pay anything for my DeuceHound. I would likely be laughed out of the king's court for bringing something common. But for a solution to Brocard's problem, especially one he could help tidy up with his technical knowledge, he might shell out a million as well, or some amount of conch anyway. Personally, I do not have any way at all to approach _his_ conjecture. He loves number theory. I do have a way to approach Brocard's problem, and the approach is improving. It seems like all the work done on Brocard is of the bounding type, or monstrous computer calculations that are able to promise no more solutions up to a certain limit. I will be trying to eliminate particular _forms_.

----------


## YesNo

If one could eliminate particular forms, that would be something. I haven't tried implementing the DeuceHound (mod 2) yet. I will try to build a Jupyter notebook and send you the link to it. It looks like I would need to know the largest power of 2 that divides n! and then I can work with the remainder.

----------


## desiresjab

The *DeuceHound (mod 2)* is extraordinarily simple to implement. In plain English, you take the F2 of the perfect power of 2, which we know is always 2^k-1, then add to it the F2 of Q, with Q being the amount you have factorialized beyond the perfect power. For instance:

F2{(2^6+31)!}=

F2{(2^6)!}+F2{31!}=

F2{64!}+F2{31!}=

63+26=89.
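This plain-English recipe can be verified wholesale against the traditional floor-function count. A sketch, assuming (as in the examples here) that Q stays below 2^k; the helper names are mine:

```python
def f2(n):
    """Factors of 2 in n! (standard floor-function count)."""
    total, p = 0, 2
    while p <= n:
        total += n // p
        p *= 2
    return total

def deucehound(k, q):
    """F2{(2^k + Q)!} = F2{(2^k)!} + F2{Q!}, for Q < 2^k."""
    return (2 ** k - 1) + f2(q)
```

Checking every Q below each power of 2 up to 2^9, the two methods agree everywhere, including the 63 + 26 = 89 example for 95!.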

----------


## desiresjab

The *DeuceHound (mod 2)* works for any two numbers one puts in for k and Q, so long as Q stays below 2^k.

The formula again is (and I would like to dispense with writing so many factorial signs, so I will introduce the new notation of F2! to mean all the factors of 2 in a factorialized number): 

F2!{(2^k+Q)}=F2!{2^k}+F2!{Q}.

If anyone has not seen this perfectly, they only need to gaze at the Ruler Sequence for a while. I do not have to look at it to write it down. Neither will you, if you gaze at it carefully after what I have to say about it. From every virgin power, it simply starts all over again. There are all kinds of clever ways the DeuceHound could be used, but just stick with the basics for now, until you see it really does start over from any perfect power of 2. I will make it really easy, and show the Ruler Sequence up to 95 (2^6+31), which means it will end on an invisible number, 95. That's okay. 94 is just fine for our purposes.

1 2 1 3..1 2 1 4..1 2 1 3..1 2 1 5..1 2 1 3..1 2 1 4..1 2 1 3..1 2 1 6..1 2 1 3..1 2 1 4.. 1 2 1 3..1 2 1 

The Ruler Sequence up to 94. From 64 to 95, the sequence is exactly the same as it is from 1 to 31. This is its nature. The Ruler Sequence is an awesome *Mandelbrotian Object*.
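The "starts over from every virgin power" claim is itself checkable by machine: the stretch of the sequence just after a power of 2 matches the stretch from the very beginning. A sketch, with my own helper names:

```python
def v2(n):
    """Number of factors of 2 in n."""
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

# Ruler Sequence values for the even numbers from lo to hi inclusive.
def segment(lo, hi):
    return [v2(n) for n in range(lo, hi + 1, 2)]
```

The run of values over the evens 66..126 (just past 64) is identical to the run over 2..62, and likewise past 128, which is the self-similarity used by the DeuceHound.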

----------


## YesNo

Here's a pdf export of a Jupyter notebook that confirms your result. I would like to make it more general. That is, given n = 2343542, compute the number of factors of 2 in n!. This just checks your example: https://drive.google.com/file/d/0B96...ew?usp=sharing

----------


## desiresjab

All you have to do is plug your own numbers into the *DeuceHound (mod 2)*. You have to figure a way to code that formula into Python, so you can plug values of your choosing into the DeuceHound for k and Q. I can also provide a formal proof, which is only to bring the formulas and reasoning from former posts together. However, the need for a lot of additions became unnecessary when I noticed that the Ruler Sequence repeated itself from the beginning every time a new power of 2 showed up. But as a throwback, remember when I said the difference between F2{(2^k)!} and F2{(2^k-2)!} would be k itself. That seems obvious enough, doesn't it? Through such reverse extrapolation we proved the entire premise before. Once my mind has moved on, it is difficult to recall exactly. But I do know this: writing a formula was going to prove horrendously difficult, the way every other _measure_ is 1 2 1 3 while the others keep changing, until I noticed that other symmetry in the Ruler Function--that it repeats itself exactly from the beginning every time it reaches a virgin power, i.e. a power for the first time.

I will show you how it might be done for a fairly difficult number, how one might produce a *Result through Descent*.


Suppose we are asked to find F2! for (2^10+150)!

The first term is always easy. We know that contains 2^10-1 factors of 2.

For the value 150, we will need to break it up again. We could break it up several times if we wanted to, but that is our choice. We could just take the value of the 150 through the familiar Floor Function, and add it to what we already have, or we could continue with the DeuceHound method, which of course we will do.

F2!{150}=F2!{128}+F2!{22}.

Again, it is easy for us to add up the 11 Ruler Sequence values for 22, but we prefer to descend further:

F2!{150}=F2!{128}+F2!{16}+F2!{4}+F2!{2}.

Now we have descended all the way to the bottom. What it suggests is that we could build the whole thing forward instead of backwards, in as tiny increments as we felt like handling. We sum

1023+127+15+3+1=1169

Notice that these summands are all powers of 2 minus the quantity one.

Okay, that is it.
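The Result through Descent amounts to writing the argument in binary and adding (power - 1) for each power of 2 that appears. A sketch, with my own name for the function (the bit trick `n & -n` picks out the lowest set bit):

```python
def f2_by_descent(n):
    """Factors of 2 in n! by descent: write n as a sum of powers
    of 2 and add (power - 1) for each one."""
    total = 0
    while n:
        low_bit = n & -n       # smallest power of 2 in n
        total += low_bit - 1   # its contribution, 2^j - 1
        n -= low_bit
    return total
```

For 2^10 + 150 = 1174 = 1024+128+16+4+2, the summands are 1023+127+15+3+1 = 1169, exactly as above.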

----------


## desiresjab

Now let us look at some more wonderments within the Ruler Sequence. Let us look at just the pure powers of 2 in the sequence. What do you suppose we would get if we added up all those powers below, say, 2^6, together with 1, since we have been working with that as an example?

1+2+4+8+16+32=63.

64-1=F2!{(2^6)!}.

The above is why the DeuceHound works. We could use this method, too, but we would still need the *DeuceHound (mod 2)* to calculate for values of Q.

----------


## YesNo

I think one way to do this is to take the number n represented in binary and then apply the rule to each bit that is set. I may get this ready tomorrow.

----------


## desiresjab

It could be done this way.

F2!{2^k+Q}=

[∑ from n=0 to k-1 of 2^n]+ F2!{Q} = (2^k-1)+ F2!{Q}.

----------


## desiresjab

> I think one way to do this is to take the number n represented in binary and then apply the rule to each bit that is set. I may get this ready tomorrow.


Yes, it is intimately related to binary. However, I do not know if that is the best way to go about it. Maybe, though. I do not know Python or OOP. I would simply do it like my last post, if I were programming a computer to work it out, which may be precisely what you meant. I would use the *Descent Technique* shown earlier to calculate F2! for Q.

----------


## YesNo

I think it would be the same as what you are suggesting to do. The binary representation of the number picks out the powers of 2 in that number.

Python is an OOP language. There are classes and inheritance. You don't need to know much about it to make it work. I started using it when my wife was working on machine learning concepts. We built clusters and then created models. I know (next to) jack about how these things are done, but with Python packages such as pandas, sympy, scikit-learn, and so on, one can get results very quickly with very little actual programming. The programming work has all been done in the packages and it has all been optimized by others. You don't have to program these things, only use what others have packaged to explore your ideas. You are giving me ideas on how to use other packages I have not explored before. It is more a matter of looking up what is available and using that to implement an idea. Also you get MathJax, which allows you to format text.

I don't know when I will get a general implementation done. I need to look for the right packages that deal with bit maps or representations of numbers to different bases so as to get factors other than 2. I don't want to have to implement that myself because I will probably implement it wrong and it will not be as efficient.

----------


## desiresjab

> I think it would be the same as what you are suggesting to do. The binary representation of the number picks out the powers of 2 in that number.
> 
> Python is an OOP language. There are classes and inheritance. You don't need to know much about it to make it work. I started using it when my wife was working on machine learning concepts. We built clusters and then created models. I know (next to) jack about how these things are done, but with Python packages such as pandas, sympy, scikit-learn, and so on, one can get results very quickly with very little actual programming. The programming work has all been done in the packages and it has all been optimized by others. You don't have to program these things, only use what others have packaged to explore your ideas. You are giving me ideas on how to use other packages I have not explored before. It is more a matter of looking up what is available and using that to implement an idea. Also you get MathJax, which allows you to format text.
> 
> I don't know when I will get a general implementation done. I need to look for the right packages that deal with bit maps or representations of numbers to different bases so as to get factors other than 2. I don't want to have to implement that myself because I will probably implement it wrong and it will not be as efficient.


Your idea of filling bytes would work. You would actually be adding each member of the Ruler Sequence individually that way. One shorter way is to figure the nearest power of 2, then get Q with the method you suggest. That would cut down on a lot of the work. Perhaps you are just looking to empirically verify through a number of different examples. I am so confident I am resting now, watching movies. In a day or two I will be coming back to see if I can do anything about 3's. I am trying to get a good sense of all the figurate connections too.

----------


## YesNo

One way to speed up the process is to "vectorize" the number. That is, represent each bit as an element in a vector and then perform component-wise replacement of the 1 with the desired number. I am still working on that; Python's numpy or pandas packages should provide a way to do it. For the moment I have updated the notebook with a function that will do arbitrary numbers, using a for loop and numpy's binary_repr to split out the individual bits. It is not an efficient way to do that.

Next steps: (1) vectorize the solution and (2) include other bases besides binary.

https://drive.google.com/file/d/0B96...ew?usp=sharing

----------


## desiresjab

I have no familiarity with vectorizing components. Once January comes and I get a new system I will be doing some programming. All my programming will involve number functions, as I am not in the least interested in programming my heater to go on and off and the like. I hate learning a new language, because it is only a tool to get at what I want, but I will dive right in because I _do_ want what I want.

Today I have to try to replace the side mirror on my car. An elk knocked it off as her head smashed into my windshield. If it had been a bull, a horn would have killed me. It doesn't do much good to slow down because they will run right to your headlights anyway. You have to lay on the horn. Who can do that while swerving with their foot hard on the brake? These are the largest elk in the world--Roosevelt elk. I hope it did not become bear food. I was only going about 5 mph when it hit me. I have lived in deer and elk country all my life and never hit one before now, though I have seen scores along the roads that others killed.

Well, it is not like you simply screw the mirror on from the outside. You have to take the door panel off. I could pay someone, but I refuse to do that on a job I _should_ be able to do myself.

Back to math. I am hesitant to start all the work on 3's, because I do not believe there is much future there. I need one formula that shakes every prime out of a number. Actually, I think there is a way to do that by making some adaptations to the DeuceHound. I think that is hard. I know my brain is going to suffer.

When I look at the Ruler Sequences for 2 and 3, the opening cycle on 2's is so short it makes it hard to write the common pattern. I know it is there and I can see it. Writing it down successfully will be like a Chinese water torture. But when that is done, it should be fairly routine to write code for the other odd primes. That is what I am hoping, anyway.

Knowing myself, it will likely be another few days before I start. I have to rev myself up to attack the hard ones. A fine treasure waits at the end of this rainbow, if I can just find the end. First, I have to make myself look for it.

----------


## desiresjab

I foresee another difficulty ahead. The DeuceHound requires a number it can convert to decimal. The only way it knows to handle an exponential number is to convert it to decimal first. This would make it a wonderful machine for what today are considered fairly small numbers like billions, trillions or quadrillions, and it is especially designed to handle factorials. But what about numbers so large there is no possibility of the whole thing sitting inside a computer at once? I think this is the magnitude of number the calculational number theory guys are working on these days. I believe they have a way of breaking these numbers up, just the way tasks are broken into many modules on some of the large, shared research projects--such as the search for the latest champion prime, or even the Sierpinski project you are contributing to (is that shared?)--with the modules delivered to home computers across the world to work on while their owners surf the web, or something like that.

Theoretically, the DeuceHound (mod ∞) would still be a useful accomplishment, I guess, because it can handle powers factorialized, of a restricted kind (powers of primes). Otherwise, it can handle large decimal numbers factorialized. 400,000,000,000!, the current ceiling for Brocard solutions, would be doable, I believe, on the DeuceHound. A large power of 2 might not be nearby, but a large power of _some prime_ might lie extremely close to 400,000,000,000!, once the machine is (mod ∞). Finding that would be an extra problem. I expect to devise a better way eventually, i.e. to attack any number regardless of the form it is found in. That is a ways ahead.

The problem I foresee with the attempt to add bits is that not every _measure_ is the same. If you just had to add 7 or 8 for each measure, that would be easy, but the values keep growing. Of course, we know what the values will be for even longer stretches than single measures. Fancy formulas or no, it may come down to adding up these stretches systematically until one reaches the desired last summand, anyway, as you have proposed. I simply do not know yet. I am still amazed that I got the *DeuceHound (mod 2)* to work correctly. I am not used to success.

----------


## YesNo

I do have one old computer running a Sierpinski sieve for PrimeGrid, a distributed computing platform.

That you have broken this up into sums should allow it to be done piecemeal. I added another implementation of the DeuceHound. It only works for 2. Unfortunately it isn't vectorized. I also haven't done any performance tests on it, but I do have a few "old_ways" procedures coded for comparison and testing. https://drive.google.com/file/d/0B96...ew?usp=sharing

----------


## desiresjab

> I do have one old computer running a Sierpinski sieve for PrimeGrid, a distributed computing platform.
> 
> That you have broken this up into sums should allow it to be done piecemeal. I added another implementation of the DeuceHound. It only works for 2. Unfortunately it isn't vectorized. I also haven't done any performance tests on it, but I do have a few "old_ways" procedures coded for comparison and testing. https://drive.google.com/file/d/0B96...ew?usp=sharing


I am quite impressed with your DeuceHound implementation. You understand. I believe what you are calling piecemeal I called the Descent Method--where you got the triangular shape. Remainders are treated the same way as any number. That method never says, "_Okay, thirty-one is small enough, I will just add up the Ruler Sequence from there_." No sir, that is what a human would do. The DeuceHound finds the smallest power under a remainder and goes to work again, a perfect slave to its method, and perfectly accurate.

Sometimes figuring out an explicit formula for something is harder than devising an algorithm. I have not tried the explicit formula yet, but I will next. Right now I want to show you the rough schematic for the *DeuceHound (mod ∞)*. It is what the formula will say and the algorithm do. It is not exactly a flowchart either. Well. See what you think. Where I am saying "print", in the outline, I just mean add. You get the idea. That will be described in the formula. Here I want the flow of the logic. Did I get it?

* * * * *

We can use the technique of adding bits to conquer 2, because it happens to be 2, and computer bits are binary. Only a general formula for all primes might conquer primes other than 2, however.

So we need a technique that will write (add) each bit for any prime. If a power of x is factorialized:

1 Write (x-1)...1's, then the next integer. This loop will be used on every print command throughout the algorithm.

2 Do that entire process x-1 times, then on the xth time write 1...x-1 times and jump to the next integer, which for the prime 2 (our example) simply means to now write down 1 3, since we had already done our process x-1 times the first time we did it. With the prime 3, we would have had to write 1 1 2..1 1 2..1 1 3. We would write that x-1 times before we came to a 1 1 4 in the Ruler Sequence for the prime 3. But here 2 is our example, because from it we must get the pattern. For 3 we wrote down 1 1 2....(x-1) times (twice) before jumping to the next integer, 3. Like this: 1 1 2..1 1 2..1 1 3..1 1 2..1 1 2..1 1 4....Before we get to 5, we have to do exactly what we just did, except for the last number, which increases by one.

3 We are back with 2 as our example. Write down everything you have again, except jump to the next integer on the last entry, which means write down the 1 2 1 3 you already have, except for the last entry, where 4 appears for the first time, like so: 1 2 1 3..1 2 1 4

4 Can you guess what to do next? Once more write down everything you have written down, but jump to the next integer on the last entry. 

5 When you reach the correct power, add all your entries up.

The above would take care of values which are pure powers of any prime. We are using 2 as our prime in this exegesis, just as a familiar example. For values of Q, if Q were small, we would choose a simple algorithm that merely walked up to that value without any recourse to powers, perhaps. If Q were still uncomfortably large, we would instead use the method of descent with that prime, which I showed before for 2's.

6 Add the value for Q to the sum you already have, and that sum is the power of x for that factorial.
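The sequence the steps above write out can be sketched directly in Python (my naming; entry i of the Ruler Sequence for a prime p is just the exponent of p in i·p, with p playing the role of x):

```python
def ruler_sequence(p, terms):
    """First `terms` entries of the Ruler Sequence for prime p.

    Entry i is the exponent of p in i * p, so the entries sit under
    the successive multiples of p on the number line.
    """
    seq = []
    for i in range(1, terms + 1):
        exponent, m = 1, i
        while m % p == 0:
            exponent += 1
            m //= p
        seq.append(exponent)
    return seq

# p = 2 gives 1 2 1 3 1 2 1 4 ...; p = 3 gives 1 1 2 1 1 2 1 1 3 ...
```

Step 5 ("add all your entries up") then amounts to summing the entries for the multiples of p up to the chosen power; for example, the nine entries for p = 3 up to 27 sum to 13.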

So the infinity model of the DeuceHound would then pull all factors of x exclusively from numbers of the type xk!+Q, with the goal of later enlarging its scope to track down the factors of any number regardless of the form it is presented in, whether this involves finding the nearest power of x to the value one is dealing with, or some other technique later to be discovered. 

* * * * *

In a nutshell: it adds 1...x-1 times, then adds the next integer. Now it must loop back and add exactly what it just added to the total except for the last number, before advancing to add the next integer, as in a virgin power. You know how Q is handled. So that's it. In the old procedural paradigm I could have done it. There would have been a lot of ghastly plumbing and loops, for sure. I understand OOP is much cleaner, easier, faster.

The DeuceHound may need a trademark and patent or copyright before long. 


* * * * *

I do not think anyone else reads this thread. I am amazed you have followed the reasoning in detail. If it is not too personal a question, may I ask your profession before you became an old man like myself? I am officially retired, but I have spent a total of at least thirty thousand hours in a chair or standing, doing each of the following to get by: playing poker, playing and teaching guitar; and doing the following for fun or ambition probably an equal amount of time: reading prose and poetry, writing prose and poetry, and doing amateur mathematics. My interests continually pass the torch around to each other. Right now math is dominating again; sometime in the future it will be one of the other interests that dominates for a good spell. Since my interests dominate my life, they are about all I do besides kiss my family a lot, eat and sleep and watch movies on my computer. When a subject dominates me it really does, and I let it gladly. There is no resistance or regret.

* * * * *

What I want to gaze at right now is a number line with powers of numbers highlighted. The first power would not be highlighted, because that would mean highlighting every number. I know what I want, but it is hard to describe. I want to get a sense of how close any power comes to another. I do not know of a rule or a law for that. All numbers are on the line; first powers are merely black as usual, while 2nd powers and beyond would be red. I need the ability to look at long stretches far out the number line to see how powers interact. I did compare powers of 2 and 3 for a ways, and after a while they are not particularly neighborly, sometimes with thousands between a power of 2 and the nearest power of 3. I have a feeling there must have been powers of other numbers that were closer than that, which fell inside those big gaps. I am trying to get a sense of how large we can expect Q to be when powers of all numbers are involved way, way out the number line.
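A rough way to gaze at that number line in code (a sketch; `perfect_powers` is a hypothetical helper, and "perfect powers" here means b^e with e >= 2, i.e. the numbers that would be highlighted red):

```python
def perfect_powers(limit):
    """All perfect powers b**e with e >= 2, up to `limit`, sorted."""
    found = set()
    b = 2
    while b * b <= limit:
        v = b * b
        while v <= limit:
            found.add(v)
            v *= b
        b += 1
    return sorted(found)

powers = perfect_powers(1000)
# gaps between consecutive highlighted numbers on the line
gaps = [b - a for a, b in zip(powers, powers[1:])]
```

Up to 40 the highlighted numbers are 4, 8, 9, 16, 25, 27, 32, 36; printing `gaps` for larger limits gives a quick feel for how the red marks spread out.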

----------


## YesNo

The piecemeal method is like the descent method, a kind of looping or recursive or fractal or mathematical induction process. An explicit formula would be something like n(n+1)/2 for the nth triangular number. The piecemeal formula just starts adding from 1 to n. However, all the iterating can take time after a while.
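The contrast between explicit and piecemeal can be shown in two tiny functions (illustrative names only):

```python
def triangular_explicit(n):
    """Closed form for the nth triangular number: n(n+1)/2."""
    return n * (n + 1) // 2

def triangular_piecemeal(n):
    """Iterative form: just start adding from 1 to n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

Both give 5050 for n = 100, but the explicit form takes the same time for any n while the piecemeal loop grows with n, which is the "iterating can take time" point above.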

With OOP one could create a class and build a set of methods supporting that class inheriting standard methods from the parent class. I didn't have to create a class and methods. I just used what Python methods were available in classes already built. However, I was thinking of building a class for a Sierpinski covering, if I ever get to that.

For the other primes there must be more involved than the base used to represent the number n, but maybe it is not as complicated as it looks at the moment.

As far as my profession, what I do could be called software engineering, computer science, data science, or database development. I have degrees in mathematics, but I do not work as an academic and I don't publish papers.

----------


## desiresjab

Degrees in math? No wonder you can follow this stuff. I was beginning to get suspicious. Only a technically trained person could or even would follow this thread in such detail. Vectorizing components was a pretty good clue. Maybe you should be writing the thread and I should be the one responding.

Anyway, I am still after the formula for *DeuceHound (mod ∞)*. It is in my brain and will not straighten itself out yet. Perhaps a sigma with double indices might take care of the main routine. As for double sigmas, I truthfully must look up how they are used again. I used to know and forgot if it is simple nesting or something else.

----------


## desiresjab

Looking at the Ruler Sequence for 3, I worked out the explicit formula for the Tre-Tracker mode of the *DeuceHound (mod ∞)*: 

F3(3^k!) = (3^k - 1)/2

Will the formula for every odd prime be as simple as this, and furthermore follow the same template, establishing a general formula? It seems intuitive that it should, yet there may be a difference for powers of 4n+1 and 4n+3 primes factorialized, for instance. There may be two separate formulas.

----------


## YesNo

So, it is (3^k - 1)/2? I will check that later today. 
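One way that check could go, using Legendre-style counting of the factors of 3 in (3^k)! (a sketch, not the notebook's code; `count_factors` is a hypothetical name):

```python
def count_factors(n, p):
    """Factors of p in n!: the sum of n // p**i over i >= 1."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

# compare the direct count against the claimed formula (3^k - 1)/2
for k in range(1, 8):
    n = 3 ** k
    assert count_factors(n, 3) == (n - 1) // 2
```

For k = 3 both sides give 13, matching the sum 1+1+2+1+1+2+1+1+3 of the Ruler Sequence for 3 up to 27.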

When you get your new computer we could exchange jupyter notebooks. We can format using mathjax.

----------


## desiresjab

I do not know anything about Jupyter and Python right now. For a long time I have desired Mathematica. It seems like it might be the ultimate in math software for home users and is not as expensive anymore. There are other packages available from different vendors. I have not had time to study them either. What do you know about all these math packages? I suppose some of them are good for a ways and then show glaring limitations. 

We will definitely exchange some notes via internet channels once we can.

I believe I am close to a formula for all odd primes. The formula for 3 is so compact it still might be useful. Beautiful mathematical objects usually fit somewhere. The general formula may be an ugly girl you avoid when the pretty one is at hand.

When one inputs his prime p and power k, the computer checks whether p is 2 or is odd. If 2, the computer performs the by now familiar *DeuceHound (mod 2)* algorithm. If odd, it makes sure the input p is a prime. Then it does its thing with its alternative algorithm for odd primes.
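The dispatch described above might be sketched like so (hypothetical names; a Legendre-style count stands in for both engines, since the odd-prime formula is still being worked out at this point in the thread):

```python
def is_prime(n):
    """Trial-division primality check, enough for small inputs."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def deucehound(p, k):
    """Validate the input, then count the factors of p in (p**k)!."""
    if not is_prime(p):
        raise ValueError("p must be prime")
    n = p ** k
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total
```

Here `deucehound(2, 3)` counts the 7 factors of 2 in 8!, and a composite p such as 9 is rejected before any counting begins.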

I think I can get there within a day or two, but I might be fooling myself. First I may have to figure out the rule for what looks to be quite a cranky sequence I have come across.

Math man, my experience has been that it is pretty easy to see and apply the rule of certain kinds of sequences, yet often a heavy challenge to find the formula that exactly describes that easy rule. The Fibonacci sequence is easy to apply, but finding the formula that gives you any element by address in the sequence did not look particularly easy to me when my eyes hastily traveled over it, with √5 mysteriously appearing and all. I believe it is the same thing with the General Ruler Sequence--applying it is easy, but finding and correctly formulating the general rule requires the brain to do heavy reps. Maybe it is just my brain. More experienced mathematicians would probably struggle far less with these same concepts--which might even be described elsewhere. But investigating them seems like a personal job. I feel it is important to know the _number mechanics_ of what is going on at all levels as thoroughly as you can, the way we investigated Quadratic Reciprocity, when one researches mathematical objects. I do not mind getting there without using a lifeline if I am able. The challenge is what I am into. Doing what it takes to construct such a function will enrich me along the way.

The _2 machine_ of the DeuceHound works. I also think the name works. Now we need its _odd prime machine_ to work as handily. Once the formula is cracked, programming it _might_ be easier than cracking the formula. I really do not know. I know in programming you get to work with algorithms. Instead of talking so much I should be trancing, or I have a feeling I will never get the plans for the DeuceHound's main engine drawn.

----------


## YesNo

Regarding the closed form of Fibonacci numbers, I don't know how to derive it, but then I would just look it up and try to figure it out from there. Here is a source you might already be familiar with: http://www.maths.surrey.ac.uk/hosted...ibFormula.html

I like to use math.stackexchange. Here is something about that closed form there: http://math.stackexchange.com/questi...onacci-numbers It is a good place to get practice asking questions and using mathjax which is rather simple to use once you do it a few times.

One of the problems with Mathematica is that it is proprietary software. Not only does that mean that it costs money but I don't think the code implementing whatever you are using can be seen by you. I prefer open source software for mathematics. Python is free and you can download enough modules to get started through Anaconda's distribution. You can also add your own modules. Also, there are thousands of packages available which means a lot of people are using the language.

----------


## desiresjab

What I mean to say is, the patterns for both 3 and 5 are easy to define recursively, since each value depends on the last, and that one on the one before it. In the case of 3, an explicit formula was easy to find. 5 seems more stubborn, or different, or maybe I am having _number blindness_. At 7 I have not looked yet. It may be that 4n+1 types and 4n+3 types _are_ different in this context, which makes sense as it is about powers, and that is why 5 will not be nice. Unfortunately, the next 4n+1 prime to test this hypothesis on is 13. The actual values for these tests grow huge fast and are therefore harder for a human to spot a pattern or constant in by eye. The DeuceHound may end up with three distinct main engines.

----------


## desiresjab

Jiminy Christmas! I got it!

When you take the long way around, reasoning everything out for yourself, it takes a while. Which is the long way of saying I know I have seen this formula before. Whether it was used specifically for this or for some other purpose, I do not know (beautiful mathematical objects often have diverse applicability), but one does not forget such beauty even when there is no understanding of it.

Fp(p^k!) = (p^k - 1)/(p - 1)

And this beauty even works for 2. I would have seen it earlier had I listened to my own instincts about the explicit formula for 3 being a template for all the other odd primes (little did I know it would work for the even prime, as well), but as usual I had to bull around in the china shop for a while before seeing the truth.

Well, the *DeuceHound (mod ∞)* is about ready to go. I suppose it has been for a couple of centuries. Sometimes it is so nice being ignorant, because playing discoverer is a great way to learn. It is infinitely better than finding the formula in a book and trying to figure out why it is so. I know why this one is so.
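The identity Fp(p^k!) = (p^k - 1)/(p - 1) can be spot-checked for several primes at once (a sketch; `count_factors` is my name for the Legendre-style count):

```python
def count_factors(n, p):
    """Factors of p in n!: the sum of n // p**i over i >= 1."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

# verify the formula for small primes and powers, including p = 2
for p in (2, 3, 5, 7, 11, 13):
    for k in (1, 2, 3, 4):
        n = p ** k
        assert count_factors(n, p) == (n - 1) // (p - 1)
```

The right-hand side is the geometric sum 1 + p + ... + p^(k-1), which is why the same template covers the even prime 2 as well.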

----------


## Dreamwoven

Congratulations, desiresjab!

----------


## desiresjab

Thanks, Dreamwoven. I only managed to labor my way into what has to be a well known fact after much sweat. That is the kind of nifty minor object Fermat was knocking out regularly 350 years ago. I am amazed I was able to find it. If the general case for 3 had not been pretty easy to spot, I would likely have missed it altogether. Now I am really curious what else that formula might be used for. Just because I saw it before does not mean it was used for this. Something like that can be hard to find. Is there a formula hound on the internet?

----------


## YesNo

Here is an updated pdf of the notebook: https://drive.google.com/file/d/0B96...ew?usp=sharing

This passes the initial test for 2 and 3, but not for 5 and 7. It might be a problem with the way I coded it.

----------


## desiresjab

> Here is an updated pdf of the notebook: https://drive.google.com/file/d/0B96...ew?usp=sharing
> 
> This passes the initial test for 2 and 3, but not for 5 and 7. It might be a problem with the way I coded it.


I don't know what could be wrong. The programming language is so compact I cannot see anything. The explicit formula should give the value of the sequence up to a certain value for n for one prime. Your program appears to want the sum of all primes up to a certain n. It will do that individually to the primes. Adding the numbers of all the primes together will not give you anything, if that is what I am seeing. Am I looking at the program incorrectly? I just got up. I need time to wake up and examine it more closely.

* * * * *

Now we have the ability to sum the Ruler Sequence for any prime. That is with each prime having the separate metric of itself. I guess what we need for Brocard is one metric to judge all primes against. That is what we do not have. We need the ability to see how the ordered primes are stuffed into a factorial, how many together we need to meet the conditions of 2n(2n+2), if any amount together can.

If we had a way to judge all primes just from the value of the formula for one prime, like 2, then we would have something that might assist with Brocard. We would not have to change the value of p each time we wanted the facts on another prime.

Meanwhile, I have not been able to find the expression (p^k - 1)/(p - 1) anywhere else again. It will turn up. It will not be as handy as some equation like d/r=t, used for everything from baking cookies to electronics, but it will have quite a few disparate uses, I am wagering.

----------


## YesNo

I think I got the description of your algorithm wrong. It isn't the sum over the primes but the sum over the powers of a particular prime dividing n. I'll try to get a correction tomorrow.

Edit: In looking it over it seems I implemented the wrong algorithm.

Edit: I updated the pdf file with what I think is the correct algorithm, but I am still getting discrepancies when I try n = 95 for primes 5 and 7. I printed out intermediate results. https://drive.google.com/file/d/0B96...ew?usp=sharing

----------


## desiresjab

95 is not a power of 5 or 7, which means you will have some value of Q tagging along. The nearest power of 5 is 125, giving us the opportunity to use subtraction on Q for the first time rather than addition, and to illustrate another symmetry of the Ruler Sequence. The measures on either side of a virgin power are symmetrical, meaning that measure for measure the sequence will look the same on either side of the virgin power until, of course, the measure containing the next virgin. This means that even when we are subtracting Q instead of adding it, all we have to do is go to the beginning of the Ruler Sequence, add those values up to and including Q, then subtract them from F5(5^k!). No nasty backwards subtractions in the sequence. We can calculate -Q precisely as if we are calculating Q. An example below, with p as 5 and k as 3, goes as follows when I simply add up Q by sight rather than instituting the repetitive breakdown on Q as a computer would do and a complete formula designate (which we call the Descent or Reduction method):

F5(95!) = F5(125!) - F5(30!) =

[(5^3 - 1)/4] - 7 = 24

If that is not right, cancel the party hats while there is time, lads!

* * * * *

By a common metric for all primes I mean this: plug in the value 2, for instance, for p, and something else for k, and the DeuceHound spits back the Fp for any prime of your choosing up to the value 2^k + Q. 2 might be the right metric, too, because it is unique among primes and has the same relationship to every prime (mod 2). Now, that kind of formula is something we are not even close to yet, do not know is frankly possible, and doubt our toolkit is up to in the search for such a master key, none of which shall stop us but only give us pause to whine a little before we proceed. We can acquire or invent new tools and perspectives where they are missing. Our toolbox is light for those hikes far out the number line.

----------


## YesNo

If we allow subtraction, then we will need a rule to tell when to use subtraction. I think the number of factors of 5 in 95! is 22. It was part of the result of running old_way(95) in the jupyter notebook. Of course, I might have programmed that wrong. 

I get a remainder of 45299530029296875 when I run 95! mod 5^24. That should be 0 if there are 24 factors of 5 in 95!
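The count can also be settled by brute force on the factorial itself, slow but unambiguous (a sketch; `factors_by_division` is a hypothetical name):

```python
import math

def factors_by_division(n, p):
    """Count factors of p in n! by repeatedly dividing the actual factorial."""
    f = math.factorial(n)
    count = 0
    while f % p == 0:
        f //= p
        count += 1
    return count
```

This gives 22 for n = 95 and p = 5, agreeing with the floor-division count from old_way(95): 95! is divisible by 5^22 but not by 5^23.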

----------


## desiresjab

Subtraction is no different from addition. To subtract, you just add. One could, in fact, change the negative sign in front of Q and obtain the same results, which is the whole idea. I am not sure what might be wrong with the program. Let me physically illustrate that the number of factors of 5 in and below 5^3 is 31, and in and below 30 it is 7 factors of 5.

The multiples of 5 up to 125, with the Ruler Sequence value (the exponent of 5) under each:

multiple: 5, 10, 15, 20, 25, 30, ..., 100, ..., 120, 125
exponent: 1, 1, 1, 1, 2, 1, ..., 2, ..., 1, 3

As you can see, counting backwards from the 3 remains the same as counting forward from 1 all the way to just before the 3. That is why we can start at the beginning of the Ruler Sequence when subtracting instead of counting backwards from the 3. Same thing.

The first 3 in the sequence takes us up to 5^3. The sixth element in the sequence takes us up to 30. The value of the sequence (added together) up to 30 is 7. Subtract that from 31 and we have 24, the answer. Notice we could have counted backwards from 5^3 for the same answer. This means we could also have counted forward six elements from the 3 to obtain an identical answer, as well. Hope that helps.

----------


## desiresjab

One of my beliefs is that mathematics will continue to play a significant role in our comprehension as we go about unfolding the universe for ourselves and exposing deeper and deeper levels of its reality. No one can predict what future mathematics will look like any more than Newton or Fermat could have told you squat about vectors, matrices, topology or complex analysis.

Will this future mathematics consist to a large degree in making headway in classifying and analyzing the set of _transcendental numbers_? And I have to ask myself, _why not_? We know almost nothing about them, yet the few we do have cognizance of are of extraordinary importance in the grappling contest with nature. Pi and e are primary to our understanding of nature. Besides those two, which were both finally _proven_ to be transcendental after much effort from great mathematicians, there are a host of other important constants waiting in the wings to be proven transcendental or not. Almost surely every one of them is, but each proof for any individual number suspected to be transcendental is titanically difficult.

How many more important constants await our discovery, linchpins of phenomena we have barely or not yet begun to study, like consciousness and quantum reality?

There are more transcendentals in the interval (1, 2) than there are rational numbers in the universe. In fact, there are more transcendentals in the interval (1, 1000001/1000000) than there are natural numbers. Make the interval as small as you want; there will always be more transcendental numbers in that interval than there are fractions in the universe.

Why would one not wonder if many great secrets lie unsuspected within this set? It is more numerous than all other sets combined, and all other sets have yielded up great truths about reality.

The unapproachability of this set in general is where the difficulty lies. Liouville finally managed, very cleverly, to construct some artificial transcendentals. As far as I know, this particular class of transcendental has not proved useful outside of mathematics, though it still could, I suppose. No one ever said _every_ transcendental might be useful. In fact, there is no such thing as every transcendental. Their numerosity cannot be counted, or even called numerosity, or even classified beyond Cantor's description of them as having the suspected power of the continuum. Cantor did not know whether any set lies between the "countable" infinities and the uncountable continuum.

This is why I cannot help but feel great paradigm shifts lie ahead which will directly correspond to our increasing understanding of this set of numbers, after the appropriate human lag in time to understand what mathematicians have discovered. This lag is always part of our reality. Leibniz was three centuries ahead of digital computers, but already understood the basic concept of such computing machines. Boole was a century and a half ahead of his time; Archimedes played around with, or very near to, the fundamental ideas of calculus. Then there was Liouville, who managed to artificially construct a transcendental number never seen before. How many centuries will it be until someone goes through the door Liouville could only open a tiny crack?

----------


## YesNo

I wonder how Liouville constructed that artificial transcendental number. He would have to make sure it could not be the root of a polynomial with rational coefficients. Here is one paper that looks promising but I haven't read it: http://deanlm.com/transcendental/con...tal_number.pdf

----------


## desiresjab

> I wonder how Liouville constructed that artificial transcendental number. He would have to make sure it could not be the root of a polynomial with rational coefficients. Here is one paper that looks promising but I haven't read it: http://deanlm.com/transcendental/con...tal_number.pdf


I have not read the link yet either. Here is how I have read that Liouville constructed his number. In every decimal place of his number he put a 0, unless that place's position was a factorial. He put 1's in the 1st, 2nd, 6th, 24th, 120th, etc. decimal places. His number then looked like this:

0.110001000000000000000001000...0001...

with the later 1's shown falling in the 24th and 120th decimal places.

He had to undertake to prove that this was transcendental. This may be intuitively clear, but I cannot quite make it out. More likely it is a very involved monster to prove his number is transcendental. But wait, he only constructed this number in the first place because he knew beforehand it would be transcendental. He had a concept and carried it out, then. If that was intuitively clear to him, then it must be possible for the same concept to be intuitively clear to us, I would think. We need to reverse engineer it. I believe that is what he must have done, once he realized the concept mentally.

Those occasional 1's mean a remainder of 1 when the Liouville number is divided by powers of 10. Any partial representation of the Liouville number ending in a 1 and divided by powers of 10 smaller than itself would also leave a remainder of 1. Don't ask me what this means. I only see it, I don't know how it fits into his proof. I would say he was a clever man.
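The construction is easy to reproduce exactly with rational arithmetic (a sketch; `liouville_partial` is a hypothetical name):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(terms):
    """Partial sum of Liouville's constant: sum of 10**(-k!) for k = 1..terms."""
    return sum(Fraction(1, 10 ** factorial(k)) for k in range(1, terms + 1))

# the first three terms already show the pattern: 0.110001
```

The factorial gaps between the 1's let these partial sums approximate the full number far better than Liouville's theorem permits for any algebraic number, which is the heart of his proof.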

----------


## YesNo

That construction would guarantee it is not a rational number, since the digits do not repeat, but how to show it is not algebraic?

One can come up with infinitely many transcendental numbers by use of the Gelfond-Schneider theorem: if a and b are algebraic numbers with a not equal to 0 or 1 and b not a rational number, then a^b is transcendental. http://sprott.physics.wisc.edu/pickoveR/trans.html I don't know how that theorem was proven either.

----------


## desiresjab

> I wonder how Liouville constructed that artificial transcendental number. He would have to make sure it could not be the root of a polynomial with rational coefficients. Here is one paper that looks promising but I haven't read it: http://deanlm.com/transcendental/con...tal_number.pdf


After having read the link, I see they point out what intuitively makes this number irrational: its digit pattern never settles into a repetend, the recurring block of digits after the decimal point that every rational number must eventually have. That makes it at least an irrational. The rest of the proof shows that it is of the transcendental variety of irrational number. Besides a little set theory notation, the proof mainly involves algebraic integers and several bounding theorems from calculus.

----------


## desiresjab

> That construction would guarantee it is not a rational number, since the digits do not repeat, but how to show it is not algebraic?
> 
> One can come up with infinitely many transcendental numbers by use of the Gelfond-Schneider theorem: if a and b are algebraic numbers with a not equal to 0 or 1 and b not a rational number, then a^b is transcendental. http://sprott.physics.wisc.edu/pickoveR/trans.html I don't know how that theorem was proven either.


I have not read this link yet. But I have seen this result before, and I must say it always surprised me that the class jump could be made so easily. Somewhere down in the mechanical process of multiplication that a power is, the numbers are doing exactly what they must when a strictly algebraic irrational is used as an exponent on another algebraic number, whether rational or not (is the way I took it). These number mechanics I believe we can only see from a higher level of abstraction. We cannot see the clicking of individual numbers in the process, or what turns the result into a transcendental, the way such a vision of number mechanics might enable a complete mechanical understanding of QR (notice I say _might_).

That might mean learning the theory of algebraic integers better. In number theory this is where polynomial equations are used to perform the operations of arithmetic upon each other instead of regular numbers doing it to regular numbers. If they are algebraic integers, then I believe (but don't know) they obey all the laws of integers in some form. We should remember this result and stay aware of it whenever we encounter the term.

I believe I remember reading that algebraic integers even have their own version of prime numbers. These may be more than something trivial like √(x+1)=7. I do not know a great deal about the theory of algebraic integers--big ol' polynomials you can treat just like integers in your calculations, is how I think of them.

----------


## desiresjab

Things which are likely shoo-ins still must be formally proven in mathematics before they can be accepted. One smiles to see that we are free to assume 2π and π^2 are transcendental, but we cannot assume π^π is.

----------


## desiresjab

That weird symbol that did not turn out well in the last post is pi.

----------


## YesNo

I was looking more at Liouville numbers. This Wikipedia link seems to contain the basic information along with a proof that these numbers are transcendental: https://en.wikipedia.org/wiki/Liouville_number

The proof depends on the concept of the "irrationality measure" of a number which is a measure of how close the number can be approximated by rational numbers. Rational numbers have an irrationality measure of 1. Basically, they are not irrational. Algebraic numbers that are not rational have irrationality measure of 2. Transcendental numbers have an irrationality measure of 2 or greater. Liouville transcendental numbers have infinite irrationality measure. They are kind of extreme as far as this measure goes. It looks like they were constructed to make sure they were as far away from being algebraic as possible which allowed them to be more easily proven to be transcendental.

The algebraic numbers are roots of polynomials with integer coefficients. They include the rationals, which are roots of linear polynomials f(x) = rx + s where r and s are integers. Algebraic "integers", a subset of the algebraic numbers, are roots of such polynomials where the coefficient of the highest power of x is 1; that is, they are roots of "monic" polynomials. For example, x - 7 has the root 7, an integer.

----------


## desiresjab

> I was looking more at Liouville numbers. This Wikipedia link seems to contain the basic information along with a proof that these numbers are transcendental: https://en.wikipedia.org/wiki/Liouville_number
> 
> The proof depends on the concept of the "irrationality measure" of a number which is a measure of how close the number can be approximated by rational numbers. Rational numbers have an irrationality measure of 1. Basically, they are not irrational. Algebraic numbers that are not rational have irrationality measure of 2. Transcendental numbers have an irrationality measure of 2 or greater. Liouville transcendental numbers have infinite irrationality measure. They are kind of extreme as far as this measure goes. It looks like they were constructed to make sure they were as far away from being algebraic as possible which allowed them to be more easily proven to be transcendental.
> 
> The algebraic numbers are roots of polynomials with integer coefficients. They include the rationals which are roots of linear polynomials f(x) = rx + s where r and s are integers. Algebraic "integers", a subset of algebraic numbers are roots of such polynomials where the coefficient of the highest power of x is 1, that is, they are roots of "monic" polynomials, such as, x - 7 would have the root 7, an integer.


I thought Liouville numbers could be _more_ closely approximated by rational numbers than pi and e. It looks like I could get pretty close with Liouville numbers. Just those occasional pesky 1's are in the way of the exact value.

----------


## desiresjab

I do not know what to look into next. I hate being stuck between projects worse than being stuck _on_ a project. When this happens it is best to go study until a relevant project suggests itself. It helps to have a Brocard or suspected Brocard connection.

----------


## desiresjab

Time to go to school, folks, for me, that is. I write my own lessons, gleaned from various sources for ideas. I hope you will audit the class. My knowledge of number theory is a patchwork rather than a logical procession up the hierarchy of concepts. I continually find it necessary to backtrack and learn things I missed in my initial excitement to forge ahead. I usually also find it necessary to revisit sites of conflict more than once before concepts sink in. Often, I am satisfied for the moment to glean an important idea of the concept that was easier than I thought it would be, whereupon I will once again set out for amateur waters where actual work (as in absorbing concepts fully) is done in a leisurely fashion more in accordance with sloth. The current topic is such an area: congruence theory. I have a mediocre understanding of it; now it is time for a complete understanding.

The main idea of the Chinese Remainder Theorem is not hard to understand. It has an analogy in ordinary algebra: solving a system of equations. Here we solve a system of congruences, which means you will have different moduli. The task is to find one modulus that works for everything and solve for n. This turns out to be a delightfully easy concept: you just multiply the various moduli together for the common modulus. The restriction is that the moduli have to be relatively prime pairwise, meaning any two you choose will be relatively prime.

Now, solving a setup problem does not mean one can see everything a theorem implies. Far from it. Learning to see the relevance of basic number theoretic functions in live situations is the more important part of learning them, and comes with experience. After we see how to solve a basic setup problem in this field, we will take a look at something the theorem implies which would have been quite, quite hard to foresee. One would really have to have an instinct for numbers to see this out of the blocks. First, a typical setup type problem.

In a system of linear congruences I could choose my numbers congruent in each modulus at random, and the theorem still guarantees solutions in integers.

* * * * *

Problem:

Find a number n such that when divided by 4 it leaves a remainder of 2, when divided by 5 a remainder of 1, and when divided by 7 a remainder of 1.

This implies that

35n≡70 (mod 140)
28n≡28 (mod 140)
20n≡20 (mod 140)

n would equal 21n-20n, right? It so happens we can set this up. We have

3(35n-28n) = 21n, so

n = 21n - 20n ≡ 3(70-28) - 20 = 210 - 84 - 20 = 106 (mod 140).

Any number in the same residue class as 106 (mod 140) is a valid solution.
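The congruences mod 140 above come from the moduli 4, 5 and 7 (4·35 = 5·28 = 7·20 = 140) with remainders 2, 1 and 1, so the answer can be sanity-checked by brute force; a throwaway Python sketch:

```python
# brute-force search over one full cycle mod 140 for the system
# n ≡ 2 (mod 4), n ≡ 1 (mod 5), n ≡ 1 (mod 7)
sols = [n for n in range(140) if n % 4 == 2 and n % 5 == 1 and n % 7 == 1]
print(sols)  # → [106]
```

The CRT guarantees exactly one solution per cycle of 140, which is what the search finds.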

* * * * *

Okay, very cool. Now, what does such a theorem imply that we might not necessarily see right away? What animal would think it means we can solve the following?

Problem:

Can one find one million consecutive integers that are not square free?
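The trick the theorem buys us can be sketched in Python on a small scale (the helper `crt` and the choice of prime squares are mine, and I shrink the million down to four so it runs instantly): ask the CRT for an x with x + i ≡ 0 (mod p_i²) for distinct primes p_i, and each of x, x+1, ..., x+3 is then divisible by a square.

```python
def crt(residues, moduli):
    # combine congruences x ≡ r (mod m) one at a time (moduli pairwise coprime)
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        t = ((r - x) * pow(M, -1, m)) % m  # solve x + M*t ≡ r (mod m); needs Python 3.8+
        x, M = x + M * t, M * m
    return x % M, M

squares = [4, 9, 25, 49]  # 2^2, 3^2, 5^2, 7^2
x, M = crt([(-i) % q for i, q in enumerate(squares)], squares)
print(x)  # x, x+1, x+2, x+3 are each divisible by one of the squares
```

For a million consecutive non-squarefree integers one would use the squares of a million distinct primes; the same two-line request to `crt` does the rest.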

----------


## YesNo

> I thought Liouville numbers could be _more_ closely approximated by rational numbers than pi and e. It looks like I could get pretty close with Liouville numbers. Just those occasional pesky 1's are in the way of the exact value.


Yes, that is how I see it as well, but the idea of "close" is different. If we pick a real number, x, and an arbitrarily small but greater than zero value, epsilon, then there are infinitely many rational numbers (p/q where p and q are integers) close to that x no matter what x we pick. That is |x - p/q| < epsilon for infinitely many values p/q.

So that idea of closeness isn't going to help differentiate rational from algebraic or transcendental numbers since there are infinitely many rationals close to any real number. 

One way out of the problem is to let epsilon vary depending on the rational number, p/q. If we replace the constant epsilon with a function of the rational number we could get something like |x - p/q| < f(p/q) = 1/q. This would allow epsilon to vary, but it is still not adequate. There are still infinitely many rational numbers close to any real number, x. 

One way to tighten the function is to raise q to some power. If we replace f(p/q) = 1/q^1 with f(p/q,u) = 1/q^u then, if u > 1, only finitely many rationals could approach any given rational number this closely (according to Wikipedia; I don't quite see it yet). If u > 2 then we can say that only finitely many rationals approach even irrational algebraic numbers. If that is the case, then we could use this to distinguish between rationals and irrationals and between irrational algebraic and transcendental numbers. We could define a function of x that gives back the precise u value at which the change occurs from having infinitely many rationals approximate x, using this new idea of closeness, to having only finitely many.

----------


## desiresjab

> Yes, that is how I see it as well, but the idea of "close" is different. If we pick a real number, x, and an arbitrarily small but greater than zero value, epsilon, then there are infinitely many rational numbers (p/q where p and q are integers) close to that x no matter what x we pick. That is |x - p/q| < epsilon for infinitely many values p/q.
> 
> So that idea of closeness isn't going to help differentiate rational from algebraic or transcendental numbers since there are infinitely many rationals close to any real number. 
> 
> One way out of the problem is to let epsilon vary depending on the rational number, p/q. If we replace the constant epsilon with a function of the rational number we could get something like |x - p/q| < f(p/q) = 1/q. This would allow epsilon to vary, but it is still not adequate. There are still infinitely many rational numbers close to any real number, x. 
> 
> One way to tighten the function is to raise q to some power. If we replace f(p/q) = 1/q^1 with f(p/q,u) = 1/q^u then, if u > 1, only finitely many rationals could approach any given rational number this closely (according to Wikipedia; I don't quite see it yet). If u > 2 then we can say that only finitely many rationals approach even irrational algebraic numbers. If that is the case, then we could use this to distinguish between rationals and irrationals and between irrational algebraic and transcendental numbers. We could define a function of x that gives back the precise u value at which the change occurs from having infinitely many rationals approximate x, using this new idea of closeness, to having only finitely many.


That will take a lot of thought to reason out. I read your post only once because I am tired. I only partially understood. The idea of a number only being approached by finitely many rationals is foreign to me, which is not bad but hard.

----------


## YesNo

All numbers have infinitely many rationals within any interval around them. What makes the set finite is that there is an extra constraint on the rational numbers, p/q. Not only must they be close, they must also pass the condition that |x - p/q| < 1/q^u where u gets larger than 1. Come to think of it, if the Liouville numbers have u arbitrarily large then they will always have infinitely many rational numbers approximating them and fulfilling this new condition. They are an extreme form of transcendental number.

----------


## desiresjab

I have to be gone for a few days again. I will be thinking about the Chinese Remainder theorem. There is a less painful way to do it. I am trying to understand the precise logic behind that method. Once you get more than three moduli to work with, it is real hard to enact the method I showed earlier, because it requires brain-twisting logic. The new method is more straightforward though a little longer. I will understand it before I present it. Understanding of this method would bring us well along on our goal of a complete comprehension of the CRT. It does not enable us to see how far the influence of the theorem spreads or how many situations that look diverse can be handled by it, but it will be a good start.

----------


## YesNo

Regarding those Liouville numbers, it occurred to me that a way to describe these numbers is to say that _they are numbers that can be approximated by a sequence of rational numbers far more closely than the size of the denominators alone would lead one to expect._

The Wikipedia article says: _A Liouville number can thus be approximated "quite closely" by a sequence of rational numbers._ https://en.wikipedia.org/w/index.php...ouville_number

The metaphor "quite closely" is misleading. One should always be able to find a sequence of rational numbers that approximates the Liouville number even closer than the sequence used to verify that the number is a Liouville number. The only problem with that closer sequence is the denominators of those rational numbers used in that closer sequence would likely be larger than those in the sequence of rational numbers used to show that the number was a Liouville number.

----------


## desiresjab

I have learned the secret of the Chinese Remainder theorem. The key involves mod inverses, which are hardly ever used in the proof, I believe. Do not have time now to explain it to our throngs of readers, but will do so when I return from my travels in about three days.

Learning these basic number theoretic functions inside out is another key to number theory. One cannot know them just so-so. Inside out, so that when one is applicable, you are sure to see it instead of putting in a lot of wasted effort. There is no other way.

----------


## YesNo

> Problem:
> 
> Can one find one million consecutive integers that are not square free?


So each of these million consecutive integers must have at least one prime to the second power?

----------


## desiresjab

> So each of these million consecutive integers must have at least one prime to the second power?


Yes. Square free means no repeated primes in the factorization, so each of those integers must contain some prime at least twice.

I am back, but too beat to do a good exegesis on the Chinese Remainder theorem, or even think of it now. To only half see it would be utter failure, and I think I half see it, so I will be at rest here until I go on. I have allowed myself to be confused. Of course I cannot allow that. Ahem!

On the matter of transcendentals, my mind wanders far into the future to wonder if any theory can exist to locate important ones. So far we have just _run into_ numbers like pi and e, or discovered them in a sense out of data. In calculus e^x is the function that has itself for derivative, and pi the ratio of the circumference of a circle to its diameter. The Feigenbaum constant was discovered after iterations in chaos theory were noticed to converge. Might there be a sieve invented by advanced minds of the future to strain out useful transcendentals, rather than having to discover each one in action? Maybe the question is crazy. I have no idea what the system of constraints would be, but the constraints of the Liouville number might be an echo of that theory. Not a scientific observation, I know, just futuristic musing.

----------


## YesNo

If no repeated factors are allowed, then we could have at most a sequence of three consecutive integers, since 4 = 2^2 divides one of every four consecutive integers.

I was thinking about the DeuceHound. I think the general idea of using only n to find all the prime factors of n! is solvable. That is, one does not have to construct n! and then factor it to get the prime factorization. One can get that from working with n itself. Here is a video on the topic that explains one technique: https://www.youtube.com/watch?v=HkAKM2lfvAA The problem with this technique is that it uses an iterative approach by looping through all the powers of a prime in n rather than a closed form to get the number of factors of a prime in n!.
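The looping technique in the video (Legendre's formula) is short enough to sketch; this is my own minimal version, not the video's code, and the function name is mine:

```python
def prime_exponent_in_factorial(n, p):
    # exponent of prime p in n!: floor(n/p) + floor(n/p^2) + ... (Legendre's formula)
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

# prime factorization of 10! without ever computing 10!
for p in (2, 3, 5, 7):
    print(p, prime_exponent_in_factorial(10, p))
```

The loop runs only log_p(n) times per prime, which is what makes working from n alone so much cheaper than factoring n! itself.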

----------


## desiresjab

> If no repeated factors are allowed, then we could have at most a sequence of three consecutive integers, since 4 = 2^2 divides one of every four consecutive integers.
> 
> I was thinking about the DeuceHound. I think the general idea of using only n to find all the prime factors of n! is solvable. That is, one does not have to construct n! and then factor it to get the prime factorization. One can get that from working with n itself. Here is a video on the topic that explains one technique: https://www.youtube.com/watch?v=HkAKM2lfvAA The problem with this technique is that it uses an iterative approach by looping through all the powers of a prime in n rather than a closed form to get the number of factors of a prime in n!.


I have not read the link yet. I am excited about doing so. Lying in bed a few moments ago I was thinking about the DeuceHound, too. It can already take its place among valid and useful number theoretic functions, claiming its own identity because it is based directly on the Ruler Function. The Achilles heel of the DeuceHound and of Legendre's floor function method is their inability to _find_ primes. They are able to manipulate what they are told are primes, they do not find these primes themselves. Any machine for finding primes _and_ manipulating them would necessarily be vast, since finding and testing for primes is the really difficult part. As long as it is told which numbers are prime, the DeuceHound will do fine. 

Storing a list of primes in the computer seems quite crude to me. But that is where the human race is on this job.

----------


## desiresjab

I looked at the video in the link and the three that followed it. No surprises. He is using the Floor Function algorithm. It is fast and general. The DeuceHound is even faster and easier where pure powers of a prime are factorialized, because it is an explicit formula in those cases and Q is equal to zero. Powers factorialized are something that might be found in problems involving the natural sciences. In case Q does not equal zero, we know how to append the value of Q to our total.

With a non-zero value for Q, is the DeuceHound as fast as the Floor method? It is quite close, I think, and has the advantage of an explicit formula for cases where pure powers are factorialized. The DeuceHound has definite similarities to the Floor Function, but is not exactly the same thing, since the Floor Function is not based on the Ruler Function. One is certainly reminded of it as one figures the value of Q, especially.

----------


## desiresjab

I am ready to wrap up the series on the Chinese Remainder theorem. We would be stunned if such a theorem did not exist, it is such a natural consequence. If x divided by p leaves a remainder of a, and x divided by q leaves a remainder of b, it is hardly surprising that x divided by pq will also leave some unique remainder as well, is it now? That is the simplicity of the theorem in plain English. As usual, the situation in mathematical notation is more difficult but more precise. But anyone should keep in mind the plain English interpretation of the Chinese Remainder theorem above when studying its mathematical details.

We already looked at a method that sometimes works easily for a system of exactly three modular equations. Later in this post or the next post we will look at the most general method for solving systems of congruences. 

The method below is extremely simple for two moduli, which we now digress a moment to cover.

x≡2 (mod 3)
x≡4 (mod 5)

5x≡10 (mod 15)
3x≡12 (mod 15)

5x - 10 = 3x - 12. Now simply solve for x.

5x + 2 = 3x

2x = -2

x = -1 ≡ 14 (mod 15)

Indeed 14≡2 (mod 3) and 14≡4 (mod 5).

This method is so easy, I recommend it whenever there are only two equations in the system. 
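As a sanity check, the two-congruence answer can be confirmed by brute force; a throwaway Python sketch:

```python
# every x in one full cycle mod 15 satisfying x ≡ 2 (mod 3) and x ≡ 4 (mod 5)
sols = [x for x in range(15) if x % 3 == 2 and x % 5 == 4]
print(sols)  # → [14]
```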

* * * * *

For those cases when we have three or more moduli, we want to show and explain the method that is at the heart of matters and will always work.

x ≡ a1·b1·(M/m1) + ... + ar·br·(M/mr) (mod M)

is the general description.

M is all the moduli multiplied together. M divided by any mi is M without that mi. The b's are the inverses (mod mi) of M/mi.

The big question is _why do we want these inverses, how did they get involved_?

You might say they are involved because mathematicians wanted them involved. In order to isolate the various ai's so they can be added (mod M), it is necessary to get rid of the terms around them. This is accomplished by multiplying M/mi by its inverse (mod mi).

* * * * *

Let's do a classic example where I will flat make up the numbers. It involves an old lady from the village who is knocked over while riding her bicycle to town with a large bag of eggs to sell. All but 160 of her eggs are broken. The culprit, a clumsy but honest mathematician, offers to reimburse her on the spot for damages. The old lady, however, being an odd sort, does not remember how many eggs she had, but she does remember a few other details. When she counted them by 3's, there was one left over; when she counted them by 4's, there were three left over; when she counted them by 5's, there were two left over; when she counted them by 7's, there were five left over. At ten cents per egg, how much did the poor mathematician have to pay the old lady?

*Step 1*

x≡1 (mod 3)
x≡3 (mod 4)
x≡2 (mod 5)
x≡5 (mod 7)


*Step 2*

140x≡140 (mod 420)
105x≡315 (mod 420)
84x≡168 (mod 420)
60x≡300 (mod 420).

In the first column we have divided 420 by each modulus, and in the second column multiplied that by the value of x in its original modulus. We must now find the inverses of 140, 315, 168 and 300 in their original moduli.

*Step 3*

(140)^-1 (mod 3) = 2

(315)^-1 (mod 4) = 3

(168)^-1 (mod 5) = 2

(300)^-1 (mod 7) = 6


*Step 4*

Multiply each Mi by both its original value (ai) under the old modulus and its mod inverse under the old modulus. This indeed isolates the ai's so they can be added (mod M).

(140)(1)(2)+(315)(3)(3)+(168)(2)(2)+(300)(5)(6)=

280+2835+672+9000=12787

12787≡187 (mod 420).

160 of her original 187 eggs are unbroken. That means the clumsy professor broke only 27 eggs. At a dime apiece, he owes the old gal a mere $2.70.
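The four steps above mechanize nicely. Here is a sketch of my own (the function name is mine; `pow(Mi, -1, m)` for the mod inverse needs Python 3.8+) using the standard form of the formula, where the inverse of M/mi rather than of ai·M/mi is taken; it lands on the same 187:

```python
def crt_solve(residues, moduli):
    # x ≡ sum of a_i * (M/m_i) * ((M/m_i)^-1 mod m_i)  (mod M)
    M = 1
    for m in moduli:
        M *= m
    total = 0
    for a, m in zip(residues, moduli):
        Mi = M // m
        total += a * Mi * pow(Mi, -1, m)  # modular inverse, Python 3.8+
    return total % M

eggs = crt_solve([1, 3, 2, 5], [3, 4, 5, 7])
print(eggs)               # → 187
print((eggs - 160) * 10)  # → 270 cents owed
```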

* * * * * 

This method will always work. We may not be quite through with the theorem, though, for I have found some other articles that even explain it graphically, which I intend to look at.

The important thing, of course, is that we make ourselves able to recognize when particular number theoretic functions are relevant and useful in situations encountered _in the wild_. Without this ability we are only able to answer prepared questions that are carefully worded to let us know which function we should be thinking about.

----------


## desiresjab

Speaking of those little important consequences of theorems which conceal themselves in so many places, I just learned one concerning Fermat's Little Theorem. Had I known it earlier, it would have greatly aided my efforts at an original proof of the theorem, could I have found a clever, non-circuitous way of proving the egg first without the chicken. Anyway, I think it is quite important and a good illustration of why one must be ever watchful for consequences of the theorems one learns, if one ever hopes to become a master of numbers in the wild. This simple, overlooked, or at least under-appreciated fact is that the theorem offers an alternative way to compute mod inverses when a ∈ Z*p: a^-1 can be computed as a^(p-2), since we have a·a^(p-2) ≡ 1 (mod p) by the theorem. Yes, circuitous, but an important fact nonetheless, whether or not it can be used constructively in a valid, original proof of Fermat's Little Theorem.
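In code the restatement is one line; a tiny sketch with a made-up prime:

```python
p, a = 101, 7              # p prime, a in Z*p (my made-up example values)
inv = pow(a, p - 2, p)     # Fermat: a^(p-2) is the inverse of a mod p
print(inv, (a * inv) % p)  # the second value must be 1
```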

Now I feel compelled to go back and look at my own work with this new restatement in hand. It is possible I even noticed this before but felt I could not use it _precisely because_ I felt the reasoning would be circuitous--like using a word in its own definition. I do not particularly need to, it is more like a drive to complete something I started and was unable to satisfactorily finish. Practice at proving is one aspect of math I could always use work on. Any function one already knows and can make the connection between is fair game for use in a proof of any theorem, including this old one. My way was a visual demonstration followed by an attempt at algebraic proof. If anything interesting comes of my revisit, I will report.

----------


## YesNo

> If x divided by p leaves a remainder of a, and x divided by q leaves a remainder of b, it is hardly surprising that x divided by pq will also leave some unique remainder as well, is it now?


That seems to be a good way to look at it with p and q prime. I didn't quite see it that way before.

----------


## desiresjab

> That seems to be a good way to look at it with p and q prime. I didn't quite see it that way before.


Thanks. I want to unravel each function in this way. What you quoted is what I now consider the nutshell truth about the CRT. Unfortunately, that understanding alone does not give one manipulative power with the CRT; it just clears the mind of junk associated with wondering what is going on.

Nor does going from the separate moduli to the combined one mean going backwards is necessarily easy, though I have seen it called trivial. It will take practice, like maneuvers in any enterprise require practice and familiarity. The CRT and other functions of number theory have to become muscle memory. Computing backwards in the CRT will be the subject of my next post. It has to be, since I have a hunch I will at first be confused.

This is a central function of number theory, because it is certainly a central theorem of modular arithmetic, and modular arithmetic is central to number theory. People who know what they are doing use it routinely in various applications. There are encryption systems based on it. On computers, calculations involving very large numbers are routinely broken down into smaller units via the CRT and the result obtained using the separate modulii, before converting back to mod pq.

I say all this to justify my staying with the function a while longer, for unless I see _plumb through it_, I have not seen enough, though the effort is tedious and exhausting for me, as well. Yet, it is the way I always preferred to study math. No way school courses go slow enough for my tastes. There are too many places I want to stop for a month and investigate the way we investigate here, but that luxury is not practical, so Monday they are on to a new topic. Everything is a rush job.

We will turn the CRT inside out like a straitjacket, put it on and take it off unaided at will, as casually as we remove our daycoat upon entering the house. I ask, _is there a lesser level of familiarity that would prove satisfactory with a function used constantly in multifarious applications in one's chosen field of study_? Unless one can see these applications when they arise in the wild, one is tapping the cane of the blind; and unless one can successfully implement central functions in those situations, one remains in the wild.

----------


## YesNo

What I like to do to improve familiarity is to program the algorithms, to the extent there are algorithms.

----------


## desiresjab

> What I like to do to improve familiarity is to program the algorithms, to the extent there are algorithms.


Yes, that is a beautiful way to get familiar. Plus, it has fringe benefits. 

Time for the next phase. How much information do we need to work backwards fruitfully? Suppose we are given the naked result of the last problem. What can we do with this result?

x≡187 (mod 420)

x≡ ? (mod 3)
x≡ ? (mod 4)
x≡ ? (mod 5)
x≡ ? (mod 7). These values are easy to fill in.

x≡ 1 (mod 3)
x≡ 3 (mod 4)
x≡ 2 (mod 5)
x≡ 5 (mod 7).

Nothing difficult about that. It appears the only difficulty lies in working the CRT forward, not backwards. Backwards is, as they say, trivial. 
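The whole backwards step is a few reductions; a trivial sketch:

```python
# working backwards from x ≡ 187 (mod 420): just reduce by each small modulus
x = 187
print([x % m for m in (3, 4, 5, 7)])  # → [1, 3, 2, 5]
```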

Some good news at last!

----------


## YesNo

I agree that going backwards is trivial except it does require being able to factor the larger modulus into a different set of primes.

----------


## desiresjab

> I agree that going backwards is trivial except it does require being able to factor the larger modulus into a different set of primes.


Now it is time for a simple application. 

*Multiply 41 times 43* via the CRT. 

Of course we ignore the fact that we know this is equal to 42^2 - 1. At some point a highly observant mathematician, given only the system below and not the two numbers to be multiplied, would notice that the two numbers were two apart anyway. This information would do him no particular good, I suppose. He would merely realize he was working out some T^2 - 1, while the solution to the system was extracted as usual using the CRT.

41≡*1* (mod 5) 43≡*3* (mod 5)
41≡*6* (mod 7) 43≡*1* (mod 7)
41≡*5* (mod 9) 43≡*7* (mod 9)
41≡*8* (mod 11) 43≡*10* (mod 11)


In typical alternate notation: (1, 3), (6, 1), (5, 7), (8, 10)

What we have to do is multiply the quantities I have bolded above.

1·3≡3 (mod 5), 6·1≡6 (mod 7), 5·7≡8 (mod 9), 8·10≡3 (mod 11). That is,

x≡3 (mod 5), x≡6 (mod 7), x≡8 (mod 9), x≡3 (mod 11).

Now all that remains is to solve the system.

3(693)((693)^-1 (mod 5)) + 6(495)((495)^-1 (mod 7)) + 8(385)((385)^-1 (mod 9)) + 3(315)((315)^-1 (mod 11)) =

3(693)(2) + 6(495)(3) + 8(385)(4) + 3(315)(8) =

6(693) + 18(495) + 32(385) + 24(315) ≡ *1763 (mod 3465)*.

It is hard to believe, after what I had to go through to get this simple product, that this method could actually be shorter and easier in some context. Perhaps mark it down to my half-dim understanding of byte mechanics in computers. We see that it does work, though, and now we understand the mechanics of multiplication via the CRT.

We had to use numbers far larger than the eventual product to get our product. What is this? How could this be easier for a computer? For a really huge number in this method, I would need a commensurately huge number of terms in the addition. This is crazy. I have to be missing something. Am I?

----------


## desiresjab

Notice that in the last problem I chose the moduli to be used in the CRT multiplication. I chose 5, 7, 9, 11. None of these have any factors in common with 41 or 43, which are both prime, by design. But suppose I had chosen to multiply two composite numbers such as 88 and 63. It would have been possible to choose some moduli that were not relatively prime to 88 or 63 or both, and this is completely okay to do. In fact, I suspect it could be one way of shortening computer run time, since in those terms where a modulus shares a factor with a multiplier, 0 will be the constant out front for that term in the addition. One could perhaps tailor the moduli so that few computations were needed because most terms dropped off to zero due to this choice.

----------


## desiresjab

That wraps it up for now on the Chinese Remainder Theorem. We know how to solve a system of congruences and we know how to multiply using the CRT. I would not know where to look for more understanding. I think that will have to be encountered in the wild, where we must be ever alert for opportunities to deploy it. This goes for all the basic functions and algorithms of number theory such as

1 The Chinese Remainder Theorem
2 The Euler Phi function
3 Quadratic Reciprocity
4 The DeuceHound Ruler Function
5 Divisor and sum of divisors functions
6 Euclidean algorithm
7 Fermat's little theorem (generalized for composites as well)
8 Wilson's Theorem
9 Discrete Logarithms
10 Primitive Roots
11 Modular power series
12 Properties of Congruences

This list can only grow. However, mastery of the above is a good idea for any serious student of number theory.

----------


## desiresjab

I have an idea for a graphically oriented computer program. Very simple--a number line stretching to the far reaches of cardinality. Marked on the line are all powers greater than 1 of every integer. As the viewer travels farther out the line at an adjustable speed, he is able to stop the progress where any two powers are relatively close together and examine the actual numbers on the line, then resume rushing outward again. Quite simple. One could have a background of stars and eerie music suggestive of infinity. My own investigations (limited by thirty-two bit precision) indicate that the powers grow ever farther apart as we proceed outward, despite there being more of them the farther out we go. I do not have any sources, and I do not know exactly how to approach the problem of calculating the distance between powers. Perhaps there is a rule. Some approximate formula would probably be the hope, if there is any hope, I mean to say.
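A rough prototype of the number-line scan (minus the stars and eerie music) is easy in Python, whose integers are not limited to thirty-two bits. This is my own sketch; `perfect_powers` is a name I invented.

```python
def perfect_powers(limit):
    """All m**k with m >= 2, k >= 2 that do not exceed limit, sorted, deduplicated."""
    powers = set()
    m = 2
    while m * m <= limit:
        p = m * m
        while p <= limit:
            powers.add(p)
            p *= m
        m += 1
    return sorted(powers)

pp = perfect_powers(10_000)
gaps = sorted((b - a, a, b) for a, b in zip(pp, pp[1:]))
print(pp[:8])     # [4, 8, 9, 16, 25, 27, 32, 36]
print(gaps[:3])   # smallest gaps between consecutive powers, e.g. (1, 8, 9)
```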

----------


## desiresjab

Here is a short link on the subject. One is reminded of the formulas for irrationality measure, and that is even brought up in one of the posts. Of course this is only concerned with powers of 2 and 3, where my question more generally involves powers of all integers at once.

http://mathoverflow.net/questions/11...nd-powers-of-3

----------


## YesNo

The irrationality measure reminds me of the Liouville questions we discussed earlier.

----------


## desiresjab

> The irrationality measure reminds me of the Liouville questions we discussed earlier.


That is what I meant. It reminded me of the same thing, which is something I definitely have to get back to. But first things first. Why did I not already know that iteration of Euler's φ function eventually produces a power of 2, and from there on out powers of 2 exclusively?

I can see why powers of 2 do it, once it gets that far, but I do not immediately see why iterations of any number always devolve to a power of 2. I was just messing around with a φ calculator online. I put in a really huge number of the variety 100...0001, looking for a big prime to begin with. I did not find one, but φ was usually about half the value of the iterated N, it seemed, once the function got cranking for a few iterations, and eventually it was always half. I do not have any information about how many steps it takes for this to occur. It seems like something that would have been figured out already. A little thought might reveal the answer intuitively. I hate to think too hard. What do you say about this?

----------


## desiresjab

Well, scratch that observation about the ф function. It was wrong. I think I am glad. Amateurs are often led astray and given to much ado about nothing, until a little more investigation reveals their haste.

----------


## desiresjab

I hate to keep correcting myself. I think I got sleepy and confused one investigative result with another. I *do* believe repeated iteration of the Euler φ function eventually yields a pure power of 2 in its chain for any whole number whatsoever, and of course from there on out, all powers of 2. It would only take one counterexample to refute this.

----------


## desiresjab

Actually, it occurs to me now that if one proved the φ function was always even (excluding the very last iteration), that would prove the iterations must eventually descend to a power of 2. What makes me curious is that the function values seem to descend to a pure power of 2 long before they _have to_ in the sense I just described. Many large arguments arrive at a pure power of 2 while the values are still quite large. 1000000, on the other hand, did not arrive until 4096.
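The chains are easy to watch with a little Python sketch of mine; `phi` is a plain trial-division totient and `phi_chain` (my name) iterates until the value is a pure power of 2. For 1000000 the chain really does first land on 4096:

```python
def phi(n):
    """Euler's totient, computed from the prime factorization of n."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def phi_chain(n):
    """Iterate phi starting at n until a pure power of 2 appears."""
    chain = [n]
    while chain[-1] & (chain[-1] - 1):   # nonzero exactly when not a power of 2
        chain.append(phi(chain[-1]))
    return chain

print(phi_chain(1_000_000))   # [1000000, 400000, 160000, 64000, 25600, 10240, 4096]
```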

----------


## YesNo

I think the phi function should be even for n > 2 based on this article: https://en.wikipedia.org/wiki/Euler's_totient_function

The function is multiplicative so we only need to consider its value for prime powers, but phi of a prime power pⁿ is equal to pⁿ⁻¹(p − 1). The p − 1 makes it even for odd prime p.

----------


## desiresjab

Good. I hadn't yet seen that its being a multiplicative function allowed us to ignore all but prime powers, though I usually suspect something like that in similar situations. We could just break it down as a bijection between (Z/pq)* and (Z/p)* × (Z/q)*, I suppose. Does a bijection make it both homomorphic and isomorphic?

----------


## YesNo

The bijection https://en.wikipedia.org/wiki/Bijection is a one-to-one and onto mapping between the elements of two sets. One can use a bijection to show that the two sets have the same number of elements. The only requirement is that the elements be all used once and only once between the two sets. 

If one talks about various homomorphisms https://en.wikipedia.org/wiki/Homomorphism then there is an algebraic structure that the map preserves when the elements are mapped from one set to the other. That is, not any mapping between the elements of the two sets will do.

----------


## desiresjab

> The bijection https://en.wikipedia.org/wiki/Bijection is a one-to-one and onto mapping between the elements of two sets. One can use a bijection to show that the two sets have the same number of elements. The only requirement is that the elements be all used once and only once between the two sets. 
> 
> If one talks about various homomorphisms https://en.wikipedia.org/wiki/Homomorphism then there is an algebraic structure that the map preserves when the elements are mapped from one set to the other. That is, not any mapping between the elements of the two sets will do.


Trying to get my morphisms straight. The character of isomorphisms seems to change a little from field to field, and homomorphisms are not the same thing as homeomorphisms.

But the precise meanings of bijection, injection and surjection are super stable, from what I can tell. For maps between finite rings of the same size, a familiar stomping ground, injective and surjective coincide, and either one gives bijective. Who could forget that? So that matter is settled as far as finite rings are concerned.

I am looking around for something to look at. I may investigate that phi function phenomenon and see if I can find any rule to how fast an argument under iteration of the function devolves to a pure power of 2. Hopefully, the reason will be easy to spot, or it could turn into a prolonged investigation. That is not so bad either. There are always connections to be spotted between these major functions.

----------


## YesNo

As I see it, homeomorphisms preserve closeness rather than an algebraic structure: https://en.wikipedia.org/wiki/Homeomorphism

I suspect if one has a bijective homomorphism or a bijective homeomorphism then the inverse mapping exists. Not only is the algebraic or topological structure preserved but also the number of elements involved. One should be able to go backwards having that. It has been a while since I thought about these ideas. I imagine most mappings don't preserve anything between the two sets.

----------


## desiresjab

Earlier this year I went through an abstract algebra course on YouTube. I did it crazy fast--like thirty-eight lectures in three or four days. I was only able to retain some parts, but it was a good introduction to the language. They never did give a proof of quadratic reciprocity though they were hitting all around it. It amazed me that complicated proofs and other undertakings were dealt with in a few sweeps of the chalk in that language. I guess this is because so much information is already contained and assumed in the forms they are using. One knows the engine from looking at the schematic of it rather than getting one's hands greasy down among the gears, was my impression of abstract algebra. I will probably slip in some of the language now and then. Mainly, as you know, I like greasy hands in mathematics. The reason abstract mathematics exists is because the smartest people were no longer able to see through the gears and wires of the engine to work on what had to be worked on. They knew there was structure there. If they couldn't, then I cannot either, but I am trying to be sure of where they stopped even as I try to learn the higher ways of schematics--bijections and isomorphisms et al. I want to have the same relationship with numbers they had before they made the transfer. For instance, Ramanujan took a good look at Brocard's problem. What did he see that made him decide he could not solve this one? None of us will ever see what Ramanujan saw, but we might somehow reach the same conclusion for a similar reason.

The only person I believe to have had as much talent for numbers is Gauss, who had the formal training such a mind needs from an early age. It seems reasonable to assume that Ramanujan might have changed the world of mathematics as vastly as Gauss did, had he been born under more propitious circumstances that afforded him an early start under expert guidance. These minds only require nudging in the right direction here and there. Euler began his higher education in mathematics at about thirteen when he went to study with the Bernoullis. One cannot say Gauss would have contributed more had he been given the opportunity to study with someone of Bernoulli caliber earlier on. The people instructing Gauss in his primary years were merely good, not great, from all accounts. The duke of Brunswick had taken note of the young prodigy and made sure he was in a place of learning. There was a student or student-teacher six or seven years Gauss's senior interested in mathematics that the prodigy consorted with. This was perfectly enough. One can be quite assured that Gauss was equipped to solve any problem any mathematician up to his time and fifty years beyond ever solved, even those that were solved during his lifetime by others who were also great mathematicians. What Abel and Galois did in proving there was no general method for solving equations of 5th degree and beyond was a discovery for the ages, and it happened during Gauss's prime. Why didn't Gauss make that discovery?

One has only to look at how full the plate of Gauss was, at what he got done, to forgive him for leaving this major discovery out of his fireplace mantel collection. He was busy at that precise moment calculating in his head the orbit of Ceres from a few degrees of arc he had been given. It was an open problem in the world of mathematics, and one that had to be solved fast. Many were scurrying to calculate an orbit so the discovery of a new planetoid would not be lost. Luckily, there was already a computer in the world in 1801. When the time came only one set of calculations was correct. Ceres re-emerged from behind the sun where and when Gauss said it would. This was not the first time he had amazed the world, but this cemented his reputation for all time. While he was calculating the orbit in his head he invented a new tool to assist him that we now call the Method of Least Squares. This is a tool now in universal use. Even Gauss could only work on so many things at a time. _Few but Ripe_, was his motto, remember. It is not true that he worked on only a few problems, but it is true that he brought most of them that he did engage with to ripeness. His mind was perhaps not superior to that of Euler but his method was cleaner and superior, I believe. Euler chopped down more individual trees than anyone. Gauss cleared forests and usually built a ranch on the spot. Euler did not build nearly as many ranches. That, I believe, is why Gauss is regarded a little higher, not only by myself but by mathematicians in general. None of this is to detract from the legacy of Euler, who is one of my idols, but merely to point out that the talent of Gauss has probably not been seen in the world again except for perhaps Ramanujan, who had a pitiful start yet still made mighty contributions.

Euler was able to calculate to fifty decimal places in his head when he needed to. Gauss's solution was different--he simply memorized a book of logarithm tables in a day or two and the problem was taken care of. Now he could hold up for examination in his head any logarithms he wished to compare, and when he needed a logarithm it was right there at his disposal.

It is obvious I am too spent to discuss math right now, or I would be writing math instead of writing about mathematicans. It is good recreation. I will try again later to look at math.

----------


## YesNo

I have a hard time remembering five digits of pi. Memorizing a table of logarithms is probably out of the question for me. I have heard of people with synaesthesia who can see numbers as colors or shapes.

----------


## desiresjab

> I have a hard time remembering five digits of pi. Memorizing a table of logarithms is probably out of the question for me. I have heard of people with synaesthesia who can see numbers as colors or shapes.


That guy from England, Daniel Tammet, has that synesthetic power, we might as well call it. Damage or deformation of the corpus callosum, which may have repaired itself the best way it could, seems to have something to do with the function of seeing numbers as colored shapes or smelling the notes of a flute concerto like a flower show.

I do not know if any synesthete ever turned his or her _dysfunction_ into a mighty contribution in any of the arts or sciences. Interesting question. In which field would success be most likely? I suppose there are different degrees of the _dysfunction_, from light to full blown. Tesla might be a candidate. Not sure. In the old days this ability is something that geniuses might have very intelligently hidden, for fear of being chained to a post while hags ran for kindling. Goethe had an unusual mind that grasped science differently than the standard model of the time. Could he have been one?

----------


## YesNo

They might have all been synaesthetic, but that may not be the correct word for it. Intuitive might be better. Or they were able to communicate better with their goddesses. Synaesthesia may be labelled a dysfunction, but it appears to be just functioning that is not the norm. I don't know if there are any brain correlates for it, nor whether there is anything wrong with it.

----------


## desiresjab

> They might have all been synaesthetic, but that may not be the correct word for it. Intuitive might be better. Or they were able to communicate better with their goddesses. Synaesthesia may be labelled a dysfunction but it appears to be just not normal functioning. I don't know if there is any brain correlates for it nor if there is anything wrong with it.


That is why I italicized _dysfunction_.

----------


## desiresjab

Just for fun, and to do for factorials what is done for powers, consider the list below.

1!-0!=0
2!-1!=1
3!-2!=4
4!-3!=18
5!-4!=96
6!-5!=600

The recursive formula for a division of two consecutive factorials is n! = (n+1)!/(n+1). Very easy.

A formula for a subtraction of two consecutive factorials, n!-(n-1)!, must look like this:

n!-n!/n, and after manipulation like

n!(n-1)/n= (n-1)(n-1)!=

n!-(n-1)!=*(n-1)²(n-2)!*. We also like the following from above because it is a factorial times a simple ratio, fast to work and succinct:

*n!(n-1)/n*.
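Both closed forms are easy to sanity-check against the table above; a throwaway Python check of mine:

```python
from math import factorial as fact

# Verify n! - (n-1)! against both closed forms for a range of n.
for n in range(2, 12):
    diff = fact(n) - fact(n - 1)
    assert diff == fact(n) * (n - 1) // n        # n!(n-1)/n
    assert diff == (n - 1) ** 2 * fact(n - 2)    # (n-1)^2 * (n-2)!
print("both identities hold for n = 2..11")      # e.g. 6!-5! = 600 = 25*24
```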

* * * * *

To respond to my own question about why repeated iteration of the φ function on a number eventually reduces to a pure power of 2 before it has to by virtue of merely being even, after staring long at the Wiki-peja article on the function, one finds this statement, which not only proves that it does happen but shows why and when as well, if one contemplates:

φ(n) is even for n > 2. Moreover, if n has r distinct odd prime factors, then 2ʳ | φ(n). The vertical bar means “divides.”
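That statement is easy to verify numerically. Here is a quick Python sketch of mine, with `factorize` and `phi` invented helpers built on trial division:

```python
from math import prod

def factorize(n):
    """Trial-division factorization of n as a dict {prime: exponent}."""
    fac, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            fac[p] = fac.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

def phi(n):
    """Euler's totient from the factorization: product of (p-1) * p**(e-1)."""
    return prod((p - 1) * p ** (e - 1) for p, e in factorize(n).items())

# Check: phi(n) is even for n > 2, and 2**r | phi(n) where r counts
# the distinct odd prime factors of n.
for n in range(3, 5000):
    r = sum(1 for p in factorize(n) if p % 2 == 1)
    assert phi(n) % 2 == 0
    assert phi(n) % 2 ** r == 0
print("verified for 3 <= n < 5000")
```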

* * * * *

Of the unsolved problems involving the φ function, Lehmer's conjecture looks captivating.

----------


## YesNo

> To respond to my own question about why repeated iteration of the φ function on a number eventually reduces to a pure power of 2 before it has to by virtue of merely being even, after staring long at the Wiki-peja article on the function, one finds this statement, which not only proves that it does happen but shows why and when as well, if one contemplates:
> 
> φ(n) is even for n > 2. Moreover, if n has r distinct odd prime factors, then 2ʳ | φ(n). The vertical bar means “divides.”
> 
> * * * * *
> 
> Of the unsolved problems involving the φ function, Lehmer's conjecture looks captivating.


Lehmer's totient problem sounds interesting. It is the first I heard of it: https://en.wikipedia.org/wiki/Lehmer's_totient_problem I would be happy to understand how someone got as far as they did with that conjecture.

I think it makes sense that if there are r distinct odd primes dividing n then 2ʳ divides the totient of n. More factors of 2 than r may divide it as well. I don't know what it means for a number to eventually reduce to a pure power of 2 before it has to by virtue of merely being even. One would have to find a measure for that concept, but one might exist.

----------


## YesNo

Regarding Lehmer's conjecture, I can see why any such n with φ(n) | n - 1 must be a Carmichael number. The order of any prime p dividing n would divide φ(n) and so divide n - 1, implying that pⁿ⁻¹ ≡ 1 (mod n): http://mathworld.wolfram.com/LehmersTotientProblem.html

What about the other results? 

(1) Why must it be square-free? (Edit: I can see why it must be square-free. If pʳ divides n then pʳ⁻¹(p - 1) divides n - 1. So p divides both n and n - 1 if r > 1. That implies p divides 1. So it must be square-free.)

(2) Why must it have at least 7 (or 11 or 14) distinct primes dividing it?

(3) Why must it be greater than 10²³?

(4) Why must it be odd? (Edit: I can see why it must be odd. If n were even then n - 1 is odd. By assumption φ(n) | n - 1, but φ(n) is even for n > 2. Considering the two cases for n <= 2: For n = 2, n is prime and since the conjecture only applies to composite n, n = 2 does not count. Since 1 is not a composite number either, n = 1 is not covered by the conjecture either.)

Here's another reference: http://math.stackexchange.com/questi...otient-problem

(5) Why are there no counterexamples of the form k·2ᵏ + 1? (Here's the paper: http://citeseerx.ist.psu.edu/viewdoc...=rep1&type=pdf)

(6) Why are there no Fibonacci numbers as counterexamples?

(7) Is the following solution to the problem correct or not? https://www.youtube.com/watch?v=swbZqPjrcGk (Edit: This doesn't convince me. We can skip the powers of b since we know n must be square-free. So let composite n = bC where b is prime; then if φ(n) = n - 1, we have φ(bC) = φ(b)φ(C) = n - 1 = bC - 1. So φ(C) = (bC - 1)/(b - 1). I don't see why that could not reduce to an integer. It would be an integer if C = 1 (more generally C = bᵏ), but then n would be prime or not be square-free.)

----------


## YesNo

The more I think about William Bouris' proof that Lehmer numbers do not exist, the more I think there might be something to it. The basic idea is that φ(C) = (bC - 1)/(b - 1) cannot be an integer unless C = bᵏ for some integer k, which would imply that n is prime or n is not square-free. If that can be shown then there are no composite n = bC with b prime. That would be the same as saying bC is not congruent to 1 mod (b - 1). At any rate, this would be another way to look for a counterexample if one actually existed.

----------


## desiresjab

> Lehmer's totient problem sounds interesting. It is the first I heard of it: https://en.wikipedia.org/wiki/Lehmer's_totient_problem I would be happy to understand how someone got as far as they did with that conjecture.
> 
> I think it makes sense that if there are r distinct odd composite primes dividing n then 2r divides the totient of n. More factors of 2 than r may divide it as well. I don't know what it means for a number to eventually reduce to a pure power of 2 before it has to by virtue of merely being even. One would have to find a measure for that concept, but one might exist.


Since I saw that all numbers did eventually reduce to a power of 2, the iterated function must necessarily reach a power of 2 merely by virtue of the function always being even. Why? There is no escape for the iterated φ function. We already know certain facts, do we not? 

φ(φ(18))=2
φ(17)=16
φ(16)=8
φ(15)=8
φ(φ(13))=4
φ(12) above
φ(φ(11))=4
φ(10) above
φ(φ(9))=2
φ(8)=4
φ(φ(7))=2
φ(6) above
φ(5)=4, etc.

We already know these values and more for small values. A number descending from above in a function doomed always to spit back as output an even result, and whose last value is known to be 1, must eventually come to rest at 2, since that is the only argument for which the function will output 1. So it is quite forced for that reason, if for no other, and it turns out there _is_ another reason, which returns pure powers of 2 as function values long before the hand of the function is forced to produce a power of 2 simply because its next value must be even.

----------


## YesNo

What I don't understand is what it means to "reduce to a pure factor of 2". The totient could only have one factor of 2 in it. Let n be a prime of the form 4m+3 to get a totient with only one factor of 2 in it.

----------


## YesNo

> Regarding Lehmer's conjecture, I can see why any such n with φ(n) | n - 1 must be a Carmichael number. The order of any prime p dividing n would divide φ(n) and so divide n - 1, implying that pⁿ⁻¹ ≡ 1 (mod n): http://mathworld.wolfram.com/LehmersTotientProblem.html
> 
> What about the other results? 
> 
> (1) Why must it be square-free? (Edit: I can see why it must be square-free. If pʳ divides n then pʳ⁻¹(p - 1) divides n - 1. So p divides both n and n - 1 if r > 1. That implies p divides 1. So it must be square-free.)
> 
> (2) Why must it have at least 7 (or 11 or 14) distinct primes dividing it?
> 
> (3) Why must it be greater than 10²³?
> ...


I have skimmed through Lehmer's original paper, "On Euler's Totient Function": https://projecteuclid.org/download/p...ams/1183496203

There are many results in this paper. Bouris does not appear to have proved his result and the technique used of showing that a ratio cannot be an integer is used in Lehmer's paper. So I will skip (7).

----------


## desiresjab

> What I don't understand is what it means to "reduce to a pure factor of 2". The totient could only have one factor of 2 in it. Let n be a prime of the form 4m+3 to get a totient with only one factor of 2 in it.


Let me do a better job of being clear. Sorry for the confusion.

_Iterates_ to a pure power of 2.

----------


## desiresjab

> I have skimmed through Lehmer's original paper, "On Euler's Totient Function": https://projecteuclid.org/download/p...ams/1183496203
> 
> There are many results in this paper. Bouris does not appear to have proved his result and the technique used of showing that a ratio cannot be an integer is used in Lehmer's paper. So I will skip (7).


I looked at the Bouris paper. One pass was really not enough, slow as I am. Almost every time he states that a proposition "assumes" something (at least fifteen times, it seemed) it would be necessary for me to think long and hard to verify his contention. I did not feel it was worth it, especially after glancing at the YouTube side menu, where it seemed Bouris might have had other proofs of many famous propositions. He is obviously more than a crackpot, but I cannot go about verifying or unverifying every proof someone claims to have made of a famous proposition. The fact that he made this one in language I understand means I _could_ follow it out, if I felt it was worth a prodigious effort. It was good mental exercise. To follow every detail completely would be too much exercise. A year from now I may look at a proof like this and follow it easily, if past is precedent. I envy you if you can. But for now I will rely on your opinion of Bouris and marshal my strength for whatever takes me. I hear Liouville calling, yet I don't know. I also hear triangles calling, Euler calling, theories of categories and forms calling, class numbers calling... The great part about being retired and a math butterfly is that where I go is usually based on inspiration or the need to fulfill other inspirations.

I look at all links provided. Sometimes I have already read it. I often re-read things many times. If I find something I like, I will stay with it for days until I have drained it as well as I can.

One could spend forever looking at the connections of the Euler φ function--it is that centrally placed. One could find an involvement for it in practically any proposition. In short, what it does in number theory is stand in for the term p-1 in the case of composite numbers. The rest of the time it _is_ p-1. Armed with this idea and a few cogent connections, one may be able to go big game hunting in the wild and have a reasonable chance of spotting the beast in camouflage.

----------


## YesNo

I don't think Bouris has a solution. When I was searching for information on the problem his paper kept popping up so I had to consider it. 

However, the Lehmer paper is worth reading. It contains the main ideas and proof techniques. 

I found an old book on continued fractions by C. D. Olds that I started reading. This should help build a foundation for Liouville numbers.

----------


## desiresjab

I looked at the Lehmer paper too. I do not claim to have understood every ounce, though stylistically it was so much cleaner and easier to follow than the Bouris. For me it was a confirmation of my beliefs concerning the importance of style in mathematics. It is impossible to always be clear for those with less understanding of a topic, but clarity carries great weight as far as it can go, especially for readers. Lehmer's style takes this into account; that of Bouris did not seem to, at least for me. A number theorist who became a master at explaining abstract concepts was Harold Davenport.

----------


## desiresjab

> I don't think Bouris has a solution. When I was searching for information on the problem his paper kept popping up so I had to consider it. 
> 
> However, the Lehmer paper is worth reading. It contains the main ideas and proof techniques. 
> 
> I found an old book on continued fractions by C. D. Olds that I started reading. This should help build a foundation for Liouville numbers.


Continued fractions are plumb scary!

----------


## desiresjab

In the meantime, though, I may be looking at decimal expansions, a topic clearly related to continued fractions.

----------


## desiresjab

I figured I would run into the DeuceHound formula if I kept reading. See a close variant in the article below used in a product. What is the formula for? The sum of the divisors of n, of course, usually denoted σ(n), proving I have not yet been around the basic number-theoretic functions as much as I need to be, or I would have recognized this. I will never forget it. The divisor functions (along with a few others) I have basically ignored, but I have seen in the last few days that they are terribly well connected.

https://mathlesstraveled.com/2007/11...umbers-part-i/

----------


## YesNo

I liked how he put the latex math symbols on that wordpress page.

----------


## desiresjab

> I liked how he put the latex math symbols on that wordpress page.


What?

----------


## YesNo

I didn't know Wordpress sites could format mathematics formulas the way that site formatted them.

----------


## desiresjab

> I didn't know Wordpress sites could format mathematics formulas the way that site formatted them.


I have never even heard of WordPress.

----------


## Dreamwoven

Wordpress have their own blogs, I have wordpress blogs and they allow public re-blogging of posts, which is very handy. See this: https://wordpress.org/news/

----------


## desiresjab

> Wordpress have their own blogs, I have wordpress blogs and they allow public re-blogging of posts, which is very handy. See this: https://wordpress.org/news/


How much do they want for it?

----------


## YesNo

You can set up a Wordpress blog for free. You can also get jupyter notebooks for free which lets you format using mathjax which I think is the same code. When I posted links to the jupyter notebooks before I was using that same code to generate those math symbols.

----------


## desiresjab

> You can set up a Wordpress blog for free. You can also get jupyter notebooks for free which lets you format using mathjax which I think is the same code. When I posted links to the jupyter notebooks before I was using that same code to generate those math symbols.


I can do almost everything with my OpenOffice word processor. The thing I cannot do is get subscripts and superscripts to line up correctly when I want to use both on a sigma, for instance.

----------


## YesNo

What I use are Google docs and sheets for my personal documents. You just need a Google account to get that. It is all in the browser. You can also use mathjax with it after installing a plugin, but I use jupyter notebooks for mathematics with the underlying python kernel so I can calculate right in the notebook. I also use Google to back up all my photos on my phone as well as copy them to my computer (Windows 10).

I was reading more about Lehmer numbers. Any Lehmer number is also a Carmichael number. I can see why Carmichael numbers exist and Lehmer numbers probably don't. The Carmichael function lambda(n) is smaller than the totient, phi(n), and so it has a better chance of dividing n - 1. For example, the Carmichael number, 561 = 3*11*17, each of 2, 10 and 16 divide 560, but not their product.

----------


## desiresjab

Right now my head is fpinning in amazement at the fimple divifor functions, which fhow up in all kinds of not-fo-fimple places. They are involved in fome bigtime formulas by powerhoufe mathematicians and even have a clofe connection to the Reimann hypothefis.

We worked out σ1(pk) for ourfelves, and may have gotten to the more general formula if we had kept at it. The fimple functions are pure magic, but one fhould not be amazed at them, for they are there to be understood and are among the more underftandable objects in number theory. The mulitplicity of their connections ftill dazzle. But one can ftare at each one of them and fully underftand why there is a function there. We ftared fo long at σ1 that we know exactly how it works, we have taken the myftery out of it. I have realized I need to ftare now at what Wiki-peja calls σ0. I have not been working or thinking much becaufe I am coming out of a depreffion. Oh, by the way, of courfe one fees many obvious connections of thefe functions to the Euler phi function, which the article explores. The formulas are fuddenly no longer fimple, they look like ftuff Ramanujan himfelf would have worked on or produced in this field, and indeed he and Hardy were working in the immediate area. I will feel much ftronger once I tie up the divifior functions. I am impatient but ftill recovering, for I want to be off to the theory of lower bounds in logarithms. The more myftery I take out of thefe things the better I might feel about it when I have to die.

----------


## desiresjab

I have forgotten fome of the techniques for calculating limits. I fuppofe reviewing them, then, had ought to be a profitable venture before I look at the theory of lower bounds in logs. I remember there were fome functions you could not tell fimply by looking at whether they converged or not until fome proper manipulation had been done. It fhould take an hour to review what is proper. But when will I get to that hour, being as lazy as I am ambitious yet full of fchemes for learning? I tend to circle thefe propofitions flowly like a dog fizing up its rival, once I have them in my fights. Then I rufh in fnarling.

----------


## desiresjab

The proof in this link makes it cryftal clear why there can be a function for the number of divifors of a number. The inconvenience is we ftill have to break it down to prime factorization, the number's pretty face is not enough to give us the number of divifors it has. I knew thefe functions were fimple. Why did I ftay away fo long? Juft lazy or afraid. Cryftal clear. Aren't there only two important ones--the number of divifors and the fum of the divifors? It feems like there was another. Maybe I juft got my notations croffed up.

http://mathschallenge.net/library/nu...er_of_divisors

----------


## Danik 2016

Lol! Some "f" spam here.

----------


## YesNo

It does seem that the "s" becomes an "f" often, but I have had things like that happen with a faulty keyboard. Sometimes it is just my fingers.

I agree that multiplicative functions are convenient until factoring the number becomes difficult. In the case of that divisor count function one would need a full factorization to take advantage of it.

----------


## Dreamwoven

It gives a very nice lisp-like feel to the text!

----------


## Danik 2016

> It does seem that the "s" becomes an "f" often, but I have had things like that happen with a faulty keyboard. Sometimes it is just my fingers.
> 
> I agree that multiplicative functions are convenient until factoring the number becomes difficult. In the case of that divisor count function one would need a full factorization to take advantage of it.


I thought it might be a joke and reacted accordingly. But I think you are right. It must be a keyboard problem.

----------


## desiresjab

> I thought it might be a joke and reacted accordingly. But I think you are right. It must be a keyboard problem.


No, all I am doing is mimicking 17th century writers. Newton and some of the people he communicated with used f's in the places of s's sometimes, but only with rules. I never figured out why they did this. Maybe someone on here knows. I think the rule was at the beginning and in the middle of words but not at the end. It may just be something idiosyncratic that developed. It was probably just an elongated s. I don't know the reason. No, folks, I have not flipped, unless you think I already was.

At least I found out who reads the thread. I already thought Danik and DW were.

----------


## desiresjab

Now for the simplest connection of all between ф and σ. 

ф(n)=σ0(n) where n is a prime. Makes perfect sense, right? The number of divisors, including itself, is equal to the number of numbers less than or equal to it which are _not_ relatively prime to it.

The number of divisors of a prime is just two, itself and 1. Two is the number of elements less than or equal to n which are _not_ relatively prime to it. The two functions are, then, identical in the case of a prime.

I just wanted to throw that in for the viewers. I think it is important and easy to remember, and knowing it might allow you to derive other related functions if there was a need in the wild. Number of divisors and number of numbers _not_ relatively prime to it, are the same thing for a prime. That is where ф will always equal σ0. 

By the way, σ0 is the number of divisors of a number; any divisor is worth 1 in the count. Whereas σ1 designates the sum of those same divisors.

----------


## desiresjab

The reason I am dwelling on the σ functions is I now believe they are the most critical functions of traditional number theory. Even the ф function, which is itself extremely important, now seems to me merely a shorthand for manipulations of the divisor functions. That makes the divisor functions incredibly important, if ф is just a special case of them!

Is this vision correct? I believe so, but perhaps I am missing something.

----------


## desiresjab

Mathematical formulas in books are indeed succinct and exact, but for that reason all the more difficult to see in their stark simplicity sometimes. Some mathematical propositions that could require great work to unravel without particular insights into this simplicity are intuitively taken in at a glance once these insights are under one's belt.

Let us examine the law that says if two numbers do indeed share a common factor, then that factor will also be a factor of their difference. If A and B share a factor that factor will also divide A-B.

Let us call n the common factor between A and B. Then pn=A and qn=B. But why the general language? Let's get specific. That is what we came to do.

A=6n, B=4n. Now isn't that better? Now there is no problem. Yes, we see easily that the common factor n still divides 6n-4n=2n, the difference of A and B.

When the general language is removed from mathematical propositions, it often serves intuitive clarity. This is the goal. This clarity is exactly what we want. I believe it cannot be possessed at the higher levels until all the elementary propositions are solid as rock in the mathematician's mind. This is why we linger, and not without profit.
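The little law above can even be checked mechanically; a minimal python sketch of the same A = 6n, B = 4n example (my own illustration):

```python
# If n divides both A and B, it divides A - B: with A = p*n and
# B = q*n we get A - B = (p - q)*n, still a multiple of n.
n = 7
A, B = 6 * n, 4 * n
assert A % n == 0 and B % n == 0
assert (A - B) % n == 0  # 42 - 28 = 14 = 2 * 7
print((A - B) // n)      # 2, i.e. A - B = 2n, just as in the text
```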

----------


## desiresjab

Philosophical mood today. Thinking about the history of mathematics. If east injuns 1500 years ago were smart enough to devise and consider what we mistakenly today call Pell's equation (x^2 - ny^2 = 1), then it would be foolish to believe they did not have possession of propositions in number theory which are more elementary. They did not leap to a proposition like Pell's equation from pure ignorance any more than we would be able to do so today without massive preparation and a solid foundation to build upon. In other words, they had possession of most of what we call elementary number theory.

Perhaps the ancient Injuns and Chinese did not yet have possession of abstract algebra and group theory; perhaps they had no theory of quadratic forms. Then again, we really do not know what some isolated individual might have achieved in his hut, those results being lost forever with his death. By the time European mathematics began its heyday, Europeans had extreme advantages in communication that had been available to no others before them. Discovered knowledge was capable of dissemination as never before. What one man discovered, many others had the chance to review and build upon; knowledge no longer had to die in a cave with its discoverer.

We only know of _some_ ancient societies that had this mathematics. We know for certain that many did not. In a pure white landscape it is easy to imagine why Eskimos would not develop even a counting system. North American injuns were mostly not advanced at all, but some of the southern injuns like Mayans and Inca were well advanced, and obviously had possession of some mathematics. This means to me that there were probably some smart Inca holed up somewhere doing number theory for curiosity and pleasure. Sub Saharan Africa seems to have had no advanced cultures. Along the Nile, Tigris and Euphrates there were numerous advanced civilizations. I believe we can assume that one mark of advanced societies is at least the mathematics of measurement and the beginnings of number theory.

----------


## desiresjab

My loquacity today seems to have no upward bound. I must be recovered..er.. I mean I am on the other side of the cycle now..ahem! Recovered, I said. 

A thought keeps nagging me. Despite the difficulty of number theory, we do not know much. By _we_ I mean _mankind_. It looks like we know a lot, but I am beginning to see it in a way that suggests we do not actually know much. Now, we *do* know a lot of detail that depends on a few propositions that may be intertwined silently in our formulas, making them valid wherever we take them. That these silent and critical propositions are so few in number at the base of our structure is what I mean by us not knowing much. Yes, there is quite a bit all right, but not as much as it will at first seem. Wherever we could relate these critical propositions, we (mankind) have delved deep on the spot. The difficulty of number theory lies less in its breadth than its depth. Group theory, for instance, comfortably encompasses number theory (as well as other forms of mathematics), not the other way around.

Once one knows these critical drawstrings (eyebrows hunched conspiratorially) one can see how they are tightening up numerous other propositions. I may be oversimplifying, but make no mistake, _number theory is difficult and is known to be_; it has that reputation given by the masters themselves.

What I am gaining is probably an overview. Everything goes back to a few propositions, and if they were not true then none of the further explorations would be either. *Deep propositions always keep a tether line to simpler ones*, is another way, perhaps, of expressing the same thing. Once you know what they are tethered to, the understanding of overview begins to set in, must be what is happening in my brain. 

* * * * *

Allow me to correct a small notational error from earlier. I said that for prime numbers the anti ф(n) and σ0(n) were the same, but ф(n) is defined as the numbers _less_ than n which are relatively prime to it, _not less than or equal to it_.

I have never heard of such a thing, but we could define this anti ф(n) to equal n-ф(n).

Then [anti ф(n)]+1=σ0(n), where n is a prime number, and the connection remains between ф and σ.
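That relation is easy to confirm for small primes; a brute-force python sketch (function names are mine):

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count (fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def sigma0(n):
    # number of divisors of n
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# For a prime p: the "anti phi" (cototient) is p - phi(p) = 1,
# and sigma0(p) = 2, so cototient + 1 = sigma0 as claimed above.
for p in [2, 3, 5, 7, 11, 13]:
    assert (p - phi(p)) + 1 == sigma0(p)
print("relation holds for the primes tested")
```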

----------


## YesNo

The totient or ф(n) where n is prime would be n - 1, that is the number of integers relatively prime to n and less than or equal to n. If n is prime all n - 1 of them are relatively prime to n. This article also defines a cototient as n - ф(n). That should be 1 for n prime since only n would have a prime factor, itself, in common with n. https://en.wikipedia.org/wiki/Euler%...tient_function

The σ0(n) where n is prime would be 2, namely, 1 and n. Those are the only divisors of n if n is prime. https://en.wikipedia.org/wiki/Divisor_function 

Edit: I just saw your most recent post which I think is saying the same thing.

----------


## desiresjab

> The totient or ф(n) where n is prime would be n - 1, that is the number of integers relatively prime to n and less than or equal to n. If n is prime all n - 1 of them are relatively prime to n. This article also defines a cototient as n - ф(n). That should be 1 for n prime since only n would have a prime factor, itself, in common with n. https://en.wikipedia.org/wiki/Euler%...tient_function
> 
> The σ0(n) where n is prime would be 2, namely, 1 and n. Those are the only divisors of n if n is prime. https://en.wikipedia.org/wiki/Divisor_function 
> 
> Edit: I just saw your most recent post which I think is saying the same thing.


They call that a cototient, eh?

----------


## desiresjab

Pell's equation is an object worth pondering.

x^2 - ny^2 = 1,

is the equation of a hyperbola. Remember those conic sections you studied somewhere along the line, folks. This is an equation for one of them. It is not really a Diophantine equation, but we simply make it so by declaring we are interested only in its integer solutions. We already know n is an integer; we insist that x and y be integers as well, that is all. When we look at the ratio x/y, we see that successive solutions give better and better approximations of √n. It is one way to get the square root of a non-square integer.

If we wanted the square root of 2, we would substitute 2 in the equation for n:

x^2 - 2y^2 = 1. The ratio x/y in the integer solutions of this equation will give better and better approximations to the square root of 2 as x and y grow larger.

Besides being a conic section, which we know are important continuous curves, this equation leads a double life as a Diophantine equation, where it can do one little job a continued fraction does in a fraction of the time, I assume.
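For n = 2, successive solutions can be generated from the fundamental solution (3, 2) by the standard recurrence for this equation; a small python sketch (my own illustration, not from the thread):

```python
# Solutions of x^2 - 2*y^2 = 1, generated from the fundamental
# solution (3, 2); the ratio x/y converges to sqrt(2).
x, y = 3, 2
for _ in range(5):
    assert x * x - 2 * y * y == 1  # each pair really is a solution
    print(x, y, x / y)             # the ratio approaches 1.41421356...
    x, y = 3 * x + 4 * y, 2 * x + 3 * y
```

So (3, 2), (17, 12), (99, 70), ... all solve the equation, and 99/70 is already a quite good approximation of √2.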

----------


## YesNo

Here's more on Pell's equation: https://en.wikipedia.org/wiki/Pell's_equation

I found an old book by Ivan Niven, "Numbers: Rational and Irrational", that has two final chapters on rational approximation of an irrational number and a proof why the Liouville number is transcendental. I am going through the exercises on those two sections to make sure I understand it. The earlier chapters are elementary. I think it was written for high school students, but even elementary material can be illuminating.

----------


## desiresjab

> Here's more on Pell's equation: https://en.wikipedia.org/wiki/Pell's_equation
> 
> I found an old book by Ivan Niven, "Numbers: Rational and Irrational", that has two final chapters on rational approximation of an irrational number and a proof why the Liouville number is transcendental. I am going through the exercises on those two sections to make sure I understand it. The earlier chapters are elementary. I think it was written for high school students, but even elementary material can be illuminating.


Sometimes elementary material can be very illuminating. Maybe this is because a lot of the formality has been stripped away, making the essential ideas easier to see.

----------


## desiresjab

> What I use are Google docs and sheets for my personal documents. You just need a Google account to get that. It is all in the browser. You can also use mathjax with it after installing a plugin, but I use jupyter notebooks for mathematics with the underlying python kernel so I can calculate right in the notebook. I also use Google to back up all my photos on my phone as well as copy them to my computer (Windows 10).
> 
> I was reading more about Lehmer numbers. Any Lehmer number is also a Carmichael number. I can see why Carmichael numbers exist and Lehmer numbers probably don't. The Carmichael function lambda(n) is smaller than the totient, phi(n), and so it has a better chance of dividing n - 1. For example, the Carmichael number, 561 = 3*11*17, each of 2, 10 and 16 divide 560, but not their product.


Carmichael numbers are such unwieldy beasts it would be semi-amazing that anyone found an example before the age of computers if not for individual examples of human industry and tenacity that far exceed the effort needed to find one of these. Euler and Gauss did incredible calculations in their heads. But still, Carmichael numbers were not predicted by theory, were they? What were those industrious early explorers who found examples looking for? I mean, why were they looking? What was there for them to have faith in, since no theory I am aware of said there would be composites which behaved just like primes when put through the machine of Fermat's Little Theorem? It quite intrigues me. I suppose they were looking because no theory said there _could not_ be such composites. Or, if at that time Fermat's Little Theorem was believed to possibly be a reliable test for primes, it makes sense that some eager beavers would have been engaged in the pursuit of a counterexample. What about that?

----------


## YesNo

I wonder if there are shortcuts to determining if a number is a Carmichael number?

I don't know the history, but there are many composite numbers that are pseudoprimes to one base or the other. All odd composites are pseudoprimes to base 1 and n - 1 since 1^(n-1) = 1 mod n and (n-1)^(n-1) = (-1)^(n-1) = 1 mod n where n is odd. It seems to make sense to look for something like this: Can one find a composite number that is a pseudoprime to every base relatively prime to the composite number?

Carmichael had Korselt's criterion (1899) to lead the way before he found one in 1910: https://en.wikipedia.org/wiki/Carmichael_number

----------


## desiresjab

> I wonder if there are shortcuts to determining if a number is a Carmichael number?
> 
> I don't know the history, but there are many composite numbers that are pseudoprimes to one base or the other. All odd composites are pseudoprimes to base 1 and n - 1 since 1n-1=1 mod n and (n-1)n-1 = -1n-1 = 1 mod n where n is odd. It seems to make sense to look for something like this: Can one find a composite number that is a pseudoprime to every base relatively prime to the composite number?
> 
> Carmichael had Korselt's criterion (1899) to lead the way before he found one in 1910: https://en.wikipedia.org/wiki/Carmichael_number


Merry Christmas to you, good sir, and to everyone reading.

I see why from the beginning the search was on for the beast eventually named Carmichael number. Many (all?) composites have a subset of their residue system consisting of "some" of the residue classes (more on "some" later). This special subset of numbers ({q1, q2, ...qk}) when raised to the n-1 power leave a residue of 1, just as numbers would do under a prime modulus. These q's are called false witnesses or liars.

No composite number n was known for which *all* its n-1 residue classes were false witnesses. That is precisely what a Carmichael number is. 

Raising every number between 1 and 561 to the 560th power was quite a bit of work in the days before computers, to find that 561 was a Carmichael number. Smart investigators, however, would have known, I suspect, that after surpassing a "certain number of" (more on "certain number of" later) false witnesses in their calculations, 561 was a Carmichael number, precluding the necessity of doing all 559 calculations, which are not trivial.
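With modular exponentiation the whole search over 561 is instant today; a python sketch counting the false witnesses (my own check, not code from the thread):

```python
from math import gcd

n = 561
# Bases coprime to n that satisfy Fermat's congruence a^(n-1) = 1 (mod n)
coprime = [a for a in range(1, n) if gcd(a, n) == 1]
liars = [a for a in coprime if pow(a, n - 1, n) == 1]

print(len(coprime))                # 320, i.e. phi(561)
print(len(liars) == len(coprime))  # True: every coprime base is a liar
```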

----------


## YesNo

It is not all n-1 residue classes that have to be false witnesses to make a Carmichael number, only those that are relatively prime to n. A base that shares a factor with n fails the a^(n-1) = 1 (mod n) form of Fermat's criterion, but because Carmichael numbers are squarefree, the congruence a^n = a (mod n) still works for every base.

Consider 561 = 3*11*17, a Carmichael number (assuming the python is correctly programmed):

3^561 = 3 mod 561, but 3^560 = 375 mod 561
11^561 = 11 mod 561, but 11^560 = 154 mod 561
17^561 = 17 mod 561, but 17^560 = 34 mod 561

There seem to be at least three layers of tests, each restricting the exponent of the witness a bit more:

a^n = a mod n
a^(n-1) = 1 mod n (Fermat's test)
a^((n-1)/2) = (a/n) mod n (Euler's test)
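The Euler layer really is stricter: 561 fools Fermat's test for every coprime base, but not Euler's. A python sketch, using a standard Jacobi-symbol routine (my own code, not from the thread):

```python
def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def euler_test(a, n):
    # Euler's condition: a^((n-1)/2) = (a/n) mod n
    return pow(a, (n - 1) // 2, n) == jacobi(a, n) % n

# 561 passes Fermat's test for the coprime base 5, but fails Euler's:
print(pow(5, 560, 561))    # 1 -- a Fermat liar
print(euler_test(5, 561))  # False -- Euler's test exposes 561
```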

----------


## desiresjab

> It is not all n-1 residue classes that are false witnesses to make a Carmichael number. Only those that are relatively prime to n.


Yes, of course. You have to excuse me, I am so used to working with prime moduli that I sometimes unconsciously lapse into that mode. For a Carmichael number, all ф(n) residues relatively prime to n must bear false witness.





> In the case of a Carmichael number, which are squarefree, one would have to get a factor for Fermat's criterion to be accurate, but it wouldn't be accurate for a^n = a (mod n). That would still work.
> 
> Consider 561 = 3*11*17, a Carmichael number (assuming the python is correctly programmed):
> 
> 3^561 = 3 mod 561, but 3^560 = 375 mod 561
> 11^561 = 11 mod 561, but 11^560 = 154 mod 561
> 17^561 = 17 mod 561, but 17^560 = 34 mod 561
> 
> There are seem to be at least three layers of tests each restricting the exponent of the witness a bit more: 
> ...


I think I see all that. I would not have been able to do your three lines of calculations to get those results without a calculator. I suspect I should be able to do it through manipulation alone. How did you get those values?

Okay. I think I see much of what is going on with Carmichael numbers. The thing I do not see is why a Carmichael number was eventually inevitable. If it happened on the number line, it was inevitable. Do you see why it was inevitable? What would one look at in Euler or Gauss's position to know a Carmichael was inevitable if you went out the number line far enough?

----------


## YesNo

There is a lot I don't intuitively see about Carmichael numbers. I don't know why they should exist, but there is a proof that infinitely many of them exist. I haven't read that proof and I don't even know how I would try to show something like that.

As far as getting those values, I put them into a jupyter notebook, created a function and ran the function. I wouldn't trust the result without checking it there. I don't have a proof that all Carmichael numbers should behave the same way, only that 561 does.

    def car(base, num):
        return [(base**num) % num, (base**(num - 1)) % num]

----------


## desiresjab

> There is a lot I don't intuitively see about Carmichael numbers. I don't know why they should exist, but there is a proof that infinitely many of them exist. I haven't read that proof and I don't even know how I would try to show something like that.
> 
> As far as getting those values, I put them into a jupyter notebook, created a function and ran the function. I wouldn't trust the result without checking it there. I don't have a proof that all Carmichael numbers should behave the same way, only that 561 does.
> 
> def car(base, num):
>     return [(base**num) % num, (base**(num - 1)) % num]


I would like to ask some questions about your academic career. It sounds like you were a math major, since you have degrees there. What was your emphasis? I assume you are at least familiar with real analysis, complex analysis, differential equations (though you may have forgotten some due to not using), whereas all I ever had beyond college algebra, trig and analytic geometry was one year of calculus. There are times I definitely feel the desire for more formal education in math.

----------


## YesNo

Yes, I remember taking classes in all of those subjects. My main interest was computational problems and hence the focus on data analysis. I can remember not liking the big-O notation which estimated an upper bound on solutions as n got large. I wanted to know how many solutions there actually were and how fast could one compute them.

The formal education has advantages. It opens up job opportunities. It gets one in the habit of writing in a certain way. It focuses attention on research papers. 

However, just having someone to talk to about these topics is very useful. I would not even be thinking about them now, if you weren't bringing them up. That is where someone who is an academic would have an advantage over both of us. Not only are they trained but they are among a community who are trying to publish new research.

----------


## desiresjab

I want to look at the repetends of decimals. At first glance it seems there would be a complete theory, but I think not.

We can say certain things about decimal numbers before they are even calculated, however. If a prime number p which ends with the digit 7 has a period of p-1 when decimalized (1/p), the digits in its decimal will contain each of the digits from 0 to 9 an equal number of times, except that the six digits in the expansion of 1/7 will occur one extra time.

I suspect something similar can be said about primes ending in other digits, but have no proof yet or even a demonstration.

The period of the decimal expansion does not appear to coincide with anything familiar except the factors of n. Which factor on sight the period will emulate seems to be a problem of depth.

For 37, the period is only three, which is neither the least nor the greatest factor of 36. Yet 31 has a period of (p-1)/2 = 15, which is the greatest factor of 30 smaller than 30. This is not some strange number to us.

Like everything else in number theory, I suspect knowledge of repetends is not complete and ceases somewhere--to be precise, right where our inability to completely master primes begins. This will be a very nice surprise if you can inform me otherwise here.

P.S. It does not agree with the order either, the order of 4 being three.

----------


## YesNo

Here's something on decimal expansions. I have only read the first page: http://people.csail.mit.edu/kuat/cou...expansions.pdf

Continued fractions also have repetitions that might be useful.

----------


## desiresjab

> Here's something on decimal expansions. I have only read the first page: http://people.csail.mit.edu/kuat/cou...expansions.pdf
> 
> Continued fractions also have repetitions that might be useful.


I found that a pretty tough paper. One expects no less at the graduate level. On the other hand I see most of what is going on in it. Primitive roots mod (p) appear to have periods p-1, if I interpreted it correctly. Of course we already knew that about primitive roots, at least what their period would be, powered up. The good part is that behavior carries over to the decimal expansion.

----------


## YesNo

I haven't had time to finish it. I am wondering how they will prove the main result on the first page that given an integer d and a base a (not equal to 2) there is a prime p such that the length of the period of the expansion of 1/p is d. I would not think this would work for any d and it doesn't seem to work for base 2. Supposedly there is no prime p that has period of length 6 in base 2.

----------


## desiresjab

I guess it is a curious fact:

(6k+1)(12k+1)(18k+1) may be a Carmichael number whenever all three factors are prime, is how I took it. Is it sufficient but not necessary? The curious fact is that 561 is not of this form, for it is (3)(11)(17). There must be more than one breed of Carmichael number. It goes to show that in the deep structure of numbers things are never quite simple. I may never know why a Carmichael number just had to be, or whether in the time of Euler and Gauss it was highly suspected there were such beasts. I do not see 561 defeating either of that pair's calculating ability, so perhaps they were not too suspicious in their day. Was Fermat's little theorem considered a good primality test then? I do not see either of them falling for that, especially Gauss, who only showed winning hands. It is kind of a nice historical question apart from understanding the technically difficult parts.
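The smallest instance of that form is easy to check by hand or machine; a python sketch (my own, using trial-division primality, fine at this size):

```python
def is_prime(m):
    # naive trial division, adequate for small m
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

# The (6k+1)(12k+1)(18k+1) form: k = 1 gives 7 * 13 * 19 = 1729,
# and all three factors are prime.
k = 1
parts = [6 * k + 1, 12 * k + 1, 18 * k + 1]
assert all(is_prime(p) for p in parts)
n = parts[0] * parts[1] * parts[2]
print(n)  # 1729

# Korselt's criterion: n squarefree and p - 1 divides n - 1 for
# every prime p dividing n. Here 6, 12 and 18 all divide 1728.
assert all((n - 1) % (p - 1) == 0 for p in parts)
```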

As I look at the complex formulas for Carmichael numbers and repetends, I see familiar friends like the G.C.D. and the phi function filling them out. Sometimes there is that big O term you were talking about, the error term, which comes last and bounds the size of the error. I think Hardy and Littlewood made a formula for the density of Carmichael numbers, but it could be another formula I am confusing it with.

The link you provided for the repetends study was pretty deep. I did not like their style. I think by going over it quite a few times I can extract most of it. I might find some other articles dealing with the same subject.

I am going to need some work on why i and e and pi are in those formulas, in fact I have noticed they seem to be a staple in many of the higher number theoretic formulas I am seeing. I have to get comfortable with that. Gauss's criterion for recognizing a mathematician of the first class in the making was immediate understanding of Euler's formula e^(iπ) + 1 = 0. I do not qualify. I have to find out why these terms have found a home in the number theoretic formulas. I believe it must be through the complex number system, and I know some trig identities are involved.

----------


## desiresjab

Oh boy! I think I found the key statement I was looking for. It was in a Wiki-peja article.

The length of the repetend of 1/p is equal to the order of 10 (mod p). If 10 is a primitive root mod p, the repetend length is equal to p-1; if not, the repetend length is a factor of p-1.

Just how precisely those fractions which produce lengths which are factors of p-1 can be nailed down, I am not sure. It may be something found in the link you provided for repetends. But that factor which is the length of the repetend is merely the order of 10 (mod p), right? That says it all. Those are the bare facts, the rest is just proofs.
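That key statement is easy to confirm for the primes discussed above; a python sketch computing the order of 10 directly (my own function, not from the thread):

```python
def mult_order(a, p):
    # multiplicative order of a mod p (assumes gcd(a, p) == 1)
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

# Repetend length of 1/p equals the order of 10 mod p:
print(mult_order(10, 7))   # 6  -> 1/7 = 0.(142857), full period p-1
print(mult_order(10, 37))  # 3  -> 1/37 = 0.(027)
print(mult_order(10, 31))  # 15 -> period (p-1)/2, as noted earlier
```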

----------


## YesNo

That sounds like a good distinction. I don't understand why the base needs to be a primitive root mod p, but it should have something to do with it.

I think there are Carmichael numbers with more than three prime factors, but not less than three.

One doesn't have to understand everything in an article, nor even read all of it. I rarely finish reading things.

The i, e and pi are in the cyclotomic polynomials to get roots of x^n - 1 in that article. I think that is just one way to get uniformly spaced points around the unit circle. One could use sines and cosines, but this is more compact. https://en.wikipedia.org/wiki/Euler's_identity I don't have an intuitive feel for this either. It is just a way to calculate. It is kind of like quantum physics. One can calculate and get useful results without knowing what it is one is talking about.

----------


## desiresjab

> That sounds like a good distinction. I don't understand why the base needs to be a primitive root mod p, but it should have something to do with it.


I believe because anything but a primitive root will power up to 1 (mod p) before its p-1 power. The power at which it reaches 1 for the first time is the order. The order of 10 (mod p) is the length of the repetend. Of course 10^(p-1) is not exactly easy to calculate for large primes. After one had gone as far as (p-1)/2, the largest possible factor of p-1 except for itself, one could safely conclude they were dealing with a primitive root (mod p).

----------


## YesNo

Yes, I see that now. The primitive root will be able to generate all relatively prime values less than the prime p and there are p-1 of them.

----------


## desiresjab

What I do not see is the structural similarity between various types of Carmichaels. That (6k+1)(12k+1)(18k+1) criterion was of course for a three-factor Carmichael. But the three-factor Carmichael (3)(11)(17) is not of the above form, and I have no idea whether an infinite number of Carmichael numbers are of a different form than the one above. So what form is it? I have not located the unifying principle between all Carmichael numbers. There are infinitely many of them, and even infinitely many Carmichaels with any number of factors you care to name. It seems to me there has to be some principle unifying all Carmichael numbers. It is probably sitting right in front of my nose and I cannot see it. I will see it, at least I think so presently.

----------


## desiresjab

This unifying principle between *all* Carmichael numbers *of any form*, may be what the Chinese civil servant recently tapped into. I feel it is tappable. I feel you and I have to tap it now, since we went ahead and challenged it, asking our not so innocent questions, and we are sure to be named wusses if we back away now without the answer. We did not back away from quadratic reciprocity until we knew there was at that time no more getting to be had from it with the tools we were using. I dread these prolonged struggles because I am lazy and always look for an easy way, in slovenly accordance with Occam's razor.

It would certainly be nice if 561 were a single rogue example and all other Carmichaels were of the above form. I highly doubt that, but do not yet know that it is untrue. I have a heuristic theory of rogue solutions underway right now which I hope to present ri'cheer in the near future.

----------


## desiresjab

A more careful reading on Carmichaels reveals that (6k+1)(12k+1)(18k+1) produces a *subset* of Carmichaels whenever all three factors are prime. This was proved in 1939. It is not yet proven that this form yields infinitely many Carmichael numbers, but it is highly suspected.

The fact that there are infinitely many Carmichael _ideals_ sounds a bit prohibitive for finding _one nature_ that defines them all. I do not know enough about the theory of ideals yet. There may not be an approach to them that is not laden with abstract algebraic notations. Perhaps we should proceed as if there were _one nature_ to be found, until otherwise is shown which we can recognize.

----------


## desiresjab

And here is a highly provocative statement from Wiki-peja:

> Since infinitely many prime numbers split completely in any number field, there are infinitely many Carmichael ideals in O_K. For example, if p is any prime number that is 1 mod 4, the ideal (p) in the Gaussian integers Z[i] is a Carmichael ideal.

----------


## YesNo

Which Wikipedia article were you reading?

----------


## desiresjab

> Which Wikipedia article were you reading?


I looked through the stuff I read. Could not find the quotes I gave. It was one of the following links. The one on category theory is killer abstract. It is hard to find a good entry point for any one of these subjects because they are all inter-related and used in the definitions of each other.

https://en.wikipedia.org/wiki/Category_theory

https://en.wikipedia.org/wiki/Injective_function

https://en.wikipedia.org/wiki/Ideal_class_group

https://en.wikipedia.org/wiki/Carmichael_number

https://en.wikipedia.org/wiki/Ideal_(ring_theory)

https://en.wikipedia.org/wiki/Abstract_algebra

https://en.wikipedia.org/wiki/Quotient_ring

https://en.wikipedia.org/wiki/Fundam..._homomorphisms

https://en.wikipedia.org/wiki/Surjective_function

----------


## YesNo

Those articles look like a good starting point. I haven't read them all, but I will start with the one on Carmichael numbers.

----------


## desiresjab

> Those articles look like a good starting point. I haven't read them all, but I will start with the one on Carmichael numbers.


I will be going over them again and again, lad, combing for details I can uptake. The only thing I can see which unifies all Carmichael numbers is Korselt's criterion: n is squarefree, and p-1 divides n-1 for every prime p dividing n. That seems to be it. It would be wonderful to find more connections unifying them. There might be some other unifying principle which would significantly lessen the work involved in computer searches for them.
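
That criterion is itself a practical test. A small sketch (assuming trial-division factoring is acceptable at this size; the function name is mine):

```python
def is_carmichael(n):
    """Korselt's criterion: n is a Carmichael number iff n is composite,
    squarefree, and p - 1 divides n - 1 for every prime p dividing n."""
    if n < 3 or n % 2 == 0:   # all Carmichael numbers are odd
        return False
    factors = []
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            factors.append(p)
            m //= p
            if m % p == 0:    # repeated prime factor: not squarefree
                return False
        else:
            p += 1
    if m > 1:
        factors.append(m)
    if len(factors) < 2:      # n is prime: not Carmichael
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

carmichaels = [n for n in range(2, 3000) if is_carmichael(n)]
# starts 561, 1105, 1729, 2465, 2821
```

A test like this is only practical when n can be factored, which is exactly the point of the encryption discussion later in the thread.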

* * * * *

Several more things to mention. The first is the difficulty of the math now facing us. How difficult for you I do not know, but for me quite difficult are the great abstract fields ahead. There can hardly be enough preparation. I am like a wolf that has been picking off theorems from the outskirts of the herd for years. I rather know what to expect and still find the slogging torturous. Those articles created far more questions than they answered for me. The basic idea of an _ideal_ seems simple enough at first presentation. The idea of a field consisting of multiples of a single number such as 2 or 3 is easy to grasp. For instance the even numbers. Every multiplication or addition of any two elements in the field produces another element of the field, a basic requirement of groups. Easy, right?

But then one learns that Carmichael numbers also form an ideal field (my language is not yet straight). And one says, _What_? I thought ideals were simple multiples of a single number. Carmichael numbers are not multiples of one another that I can see.

So I have a long ways to go to get the basics of this higher mathematics under control. If Category Theory is not higher mathematics, then I do not know what is. If Category Theory is still elementary mathematics, I am, sir, a monkey's uncle. From what I have seen so far it offers the highest and most abstract view available in mathematics. I have been toiling in the grease of the gears of raw numbers for too long. I now must attempt to take the step to the lofty views which dash away whole categories with a few slashes of chalk.

----------


## desiresjab

The other topic today is the prevalence of mathematical genius. I hope some who normally only read here will pitch in with some ideas.

Are there talents equivalent to Gauss, Euler, Newton and Ramanujan in the world today? If not, why not?

There are more people in the world than ever and that means more researchers than ever. Statistically, we should have math men as great as those aforementioned in the world today. But one gets the feeling a Gauss or Newton might have already completely realized the mathematics of string theory, tying it to physics and forcing the paradigm shift in thinking. There are great mathematical minds in every age. Of course we have them now, too.

Fifteen and seventeen year old thinkers do not make major discoveries anymore. They might make a small one here and there. Perhaps the period of preparation needed to bring oneself up to specs on contemporary research topics is so long it precludes that happening anymore. At eighteen one could make a reasonable argument that Gauss and Newton were already the world's best mathematicians. I doubt if this is still possible, but I hope it still is. I would love to see some fifteen year old force the world to shift its paradigm of reality.

----------


## desiresjab

One limiting factor of mathematics is the lack of poetry. Yes, I said _lack of poetry_. For poetry consists always of employing one word in a context where it is superior to all others. Mathematicians are decent at choosing appropriate symbols, but are often worse than mediocre when it comes to choosing terms in a spoken language, and downright awful for accepting as standard some of the terms they have.

Either the term *order* refers to the power (mod p) to which one must raise the constant _a_ until the value wraps back around to one, or it refers to the number of elements in a Finite Galois Field. Which is it, math boys? You boys do not even care that these two branches are closely aligned, and expect newcomers to put up with your perfectly avoidable ambiguities as if they did not exist. Ignore them, you teach.

Let the student instruct the master, the master whose purpose is to eliminate ambiguity. In how many ways in how many branches must you idiots continue to use the words _Order, Class, Congruent_, et al, to mean different things? I can find such examples all over mathematics.

* * * * *

While we are at it, I do not think the symbol for pi ought to be used for anything in math other than the ratio of a circumference to a diameter. Admittedly, there is a genuine paucity of available symbols, unlike in an actual language where symbols combine to form words, many of which are synonyms, so there is more excuse for bad notation in math than for bad poetry, where a multiplicity of words is available.

The notion will make more sense to me (in Group Theory) if it turns out that it equates to the power we must raise _a_ to (mod p) before it becomes _a_ again. This would be just another angle on Fermat's Little Theorem, the unreduced version where a^p ≡ a (mod p). Is that what is going on with the confusion of words mathematicians have chosen? I could accept that linguistic disparity because both notions of *order* are at least based on the same theorem in two different forms, those two forms being:

a^p ≡ a (mod p), and a^(p-1) ≡ 1 (mod p).
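
Both senses of *order*, and both forms of the theorem, can be seen side by side in a few lines of Python; the function name here is my own:

```python
def order_mod(a, p):
    """Smallest k >= 1 with a^k ≡ 1 (mod p); a must be coprime to p."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

p, a = 13, 2
assert pow(a, p, p) == a % p       # a^p ≡ a (mod p), the unreduced form
assert pow(a, p - 1, p) == 1       # a^(p-1) ≡ 1 (mod p), the reduced form
# the order of a (second sense) always divides p - 1,
# the size of the multiplicative group (first sense):
assert (p - 1) % order_mod(a, p) == 0
```

Here 2 happens to be a primitive root mod 13, so its order is the full p - 1 = 12; for other bases the order is a proper divisor.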

----------


## YesNo

I don't know much about category theory. 

We have been traveling to avoid the cold in Chicago, although it was rather warm there last week. I just got the computer connected to the internet again.

There are a lot of overused words in mathematics as you mentioned like "order". Whatever they refer to in some context should be unambiguous or one won't be able to use logic to show something is false or true.

----------


## desiresjab

And speaking of ugly and unimaginative...the slash /, should never be used in mathematics to mean anything other than simple division. In the group theory I now need to look at they seem to use it all the time for something else. I hate this. I detest it. It makes mathematicians look stupid in their own way.

----------


## desiresjab

While we are waiting for formal higher math to reveal the best port of entry for my weaknesses, let us take a look at the Euler phi function, with the eventual purpose of understanding precisely a modern encryption system such as RSA through this function. We looked at this function before but it always deserves more. I do not remember what we said before.

The function is always even for n > 2. For an even number it is easy to see why by inspection of this formula for φ(n):

φ(n) = n ∏_{p|n} (1 - 1/p).

Notice the right side is multiplied by n, which we already know is an even number. Case closed. 

When n is odd, that is, when there are no powers of 2 in its prime factorization, the following formula for a power of a prime makes the answer transparently obvious:

φ(p^k) = p^k (1 - 1/p) = p^k - p^(k-1).

That subtraction on the right *must* result in an odd number minus an odd number, which we know is always even. Case closed. The ф function is always even.
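
A sketch of the product formula in integer arithmetic (assuming trial-division factoring is acceptable; the factor (1 - 1/p) becomes a divide-then-multiply so everything stays exact), with a check of the evenness claim for n > 2:

```python
def phi(n):
    """Euler's phi via the product formula phi(n) = n * prod(1 - 1/p),
    computed exactly as result = result // p * (p - 1) per prime p | n."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result = result // p * (p - 1)
        p += 1
    if m > 1:                       # a remaining prime factor > sqrt
        result = result // m * (m - 1)
    return result

# phi(n) is even for every n > 2 (but phi(1) = phi(2) = 1):
assert all(phi(n) % 2 == 0 for n in range(3, 1000))
```

The two exceptions n = 1 and n = 2 are why the evenness claim needs the n > 2 caveat.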

* * * * *

I am having a difficult time penetrating very far into ideal numbers. I can see it will take more than a few new tools. One thing about the φ function is that it is always even. I may be a dummy, but I believe that allows me to say the function could be stated in terms of the ideal of the even numbers. The function does play a significant role in ideal theory, if I am not mistaken. No surprise there. That is what great functions do--show up everywhere.

The elementary functions are so interrelated one could probably state any of them in terms of the others. The Euler ф function would be no more than a special and quite extended exercise or circumstance of the divisor function. That would be quite a mess. Thankfully, there is a division of labor among a bevy, a cluster, of closely related elementary functions, parcelling out the various applications and implications. This is much nicer.

I can already tell anyone riding along and reading who wants to follow the mechanical details of the encryption process, that it will boil down to some tricks of exponents on φ, almost magic-like tricks, where you have to follow the bouncing ball. Even after you see it, it can get away from you again in the next moment. But once you do see it, you will be able to retrace your steps and see it again, even when you lose sight of it.

Does this mean I see it? Actually, friends, I do not. I forgot. I used to know more about it than I do now. I followed the reasoning in detail once and saw that it was a game of exponents and how they worked in a round robin of substitution until you arrived back where you started. The φ function was never mentioned way back then in the article I read. It is not the only function that encryption systems have been based on, but I believe it is used in RSA, which is the most widely used security system.

If we can make the mathematical mechanics of RSA perfectly and intuitively transparent to ourselves, we have the right to toy around with ideas for an encryption system of our own devising, but not before, certainly not before!

----------


## desiresjab

Eventually, Yes/No will drag back to the big shoulders of frigid Chicago and give us some good direction and pointed tips. He is a mathematician and computers are his gig. He may have made the encryption process completely transparent to himself already. It is hard telling what a mathematician will get himself up to and involved with. Myself, I am retired so allowed the luxury of going only where I think it matches my desires--one cannot always tell in advance in _foreign_ territory--and my desire is to understand numbers (from natural to complex) on the deepest levels I can.

Instead of prodigy, Paul Erdos called one who remained intellectually active in their dotage a _dotagy_. I am a dotagy who feels that not only the universe but we ourselves and our consciousness are underlain by deeper layers of unrevealed structure, just like mere numbers are continually rediscovered to be. To us, these deep layers are necessarily more complex, or we would have discovered them first. To me, they possibly hint of what men have correctly or erroneously attributed to _objects_ like spirits and souls in times past. I am one who feels it is probable there are structures of increasing complexity within us that explain age old mysteries and legends, myths, dreams, ESP, prescience, et al. I am trying to edge closer to it before I die. Maybe there is some advantage to be had from pursuing the deeper structures as a pure amateur who will never publish a math paper. I wrote novels, too, that I believe no one will ever read. But in writing them I felt that perhaps I could edge closer to Yeats' golden Byzantium of artifice and creation. What I mean is, I felt that they might have counted in some deeper way as part of another structure, whether any person ever read them or not. Byzantium would recognize me at my death. Okay. Pretty weird.

Back to numbers.

----------


## desiresjab

The general idea is this: Multiply two gigantic (and I mean gigantic!) primes together, and no one can tell from looking at the result which the hell two numbers you multiplied to get this result.

There are a couple of reasons:

(1) You are in Gigantic territory.
(2) Factoring numbers is inherently hard.

That explains how at the heart of modern e-encryption systems lies the problem of factoring a large number. You--the person trying to break the code--even have some advantages, it would seem at first sight. First, you know it is a composite and not a prime, though it looks exactly like the kind of number computational number theorists would put great work into to determine its primality or compositeness. Factoring is hard, especially when your two prime number factors are in the neighborhood of four hundred digits long apiece. You know there are only two and you also know they are _relatively_ close together, i.e. not all that far from the square root of the giant product--and you still cannot do it, even with your computer. If you had a quantum computer, yes, you could do it.

* * * * *

_Okay, Jabby boy_, you say, _how does the ф function come into the picture_?

Well, you see, being able to hide what those big factors are is only part of the job, in fact it is the easy part. For anyone can multiply two gigantic primes together and you will not be able to factor them, whether you are the world's best mathematician or the Cray super computer. The clever part was turning the ability to hide those factors into a way to transmit a secure message. This took some footwork worthy of Jersey Joe Walcott. It is like mathematical sleight of hand.

The inventors of RSA truly invented their system, but independently, as they say in the sciences. The work of the man who got there first, a few years before RSA, was immediately marked classified, or his grandchildren might now be vastly rich.

It usually turns out that some smart people got there early in the game, so to speak, before their time.

Anyway, what is needed after multiplying together two extremely large primes, is a way to carry a secure message with the even larger product.

Ask yourself this as you ponder the things above and I go to my slates:

If given the φ of an enormous number but not the number itself, could you determine the number?

If given the φ of an enormous number and given the number as well, could you determine its factors, since φ(pq) = φ(p)·φ(q) = (p-1)(q-1)?
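
The second question answers itself with a little algebra: if n = pq and φ(n) = (p-1)(q-1), then p + q = n - φ(n) + 1, so p and q are the two roots of x^2 - (p+q)x + n. A sketch (the function name is mine; the toy numbers are small, nothing like real key sizes):

```python
from math import isqrt

def factor_from_phi(n, phi_n):
    """Given n = p*q (distinct primes) and phi(n) = (p-1)*(q-1),
    recover p and q as the roots of x^2 - s*x + n, s = n - phi(n) + 1."""
    s = n - phi_n + 1            # p + q
    disc = s * s - 4 * n         # (p - q)^2
    root = isqrt(disc)
    assert root * root == disc   # must be a perfect square
    p, q = (s - root) // 2, (s + root) // 2
    assert p * q == n
    return p, q

# toy example: n = 53 * 61 = 3233, phi(n) = 52 * 60 = 3120
assert factor_from_phi(3233, 3120) == (53, 61)
```

So leaking φ(n) is exactly as bad as leaking the factorization itself, which is why the whole game is keeping p and q secret.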

----------


## desiresjab

Okay, I think I have a good handle on the basic mechanics of RSA. What I want to do is pare the explanation down to a simple minimum. It will not be the full presentation of how the code is implemented into ASCII, etc., but a bare revealing of the mechanics.

All this is for a later post, once I figure out the best presentation and clear up any lingering questions I myself have.

RSA is indeed clever, but easily understandable with the precise tools we have been using in this thread for months. It is only commutative modular arithmetic in a finite field or ring, to throw some fancy words around. It depends on being able to find the modular inverse of a number with respect to φ(n) when φ(n) = φ(pq) = φ(p)·φ(q) and you only know n but not p or q.

I will try to set it up so it is easy to understand.

----------


## YesNo

If I remember right, the slash / originated with Fibonacci to represent rational numbers. It took some time to get a decimal representation of them.

Since I am too old to be a prodigy, I'll have to try for a dotagy.

Robert Prechtner uses the ratios of consecutive integers of the Fibonacci sequence to show how this might affect our herding ability as well as other things in nature. I think there is something to this. It is called Elliott Wave theory.

----------


## desiresjab

> If I remember right, the slash / originated with Fibonacci to represent rational numbers. It took some time to get a decimal representation of them.
> 
> Since I am too old to be a prodigy, I'll have to try for a dotagy.
> 
> Robert Prechtner uses the ratios of consecutive integers of the Fibonacci sequence to show how this might affect our herding ability as well as other things in nature. I think there is something to this. It is called Elliott Wave theory.


That sounds like a likely place for the slash to have begun.

----------


## desiresjab

Here is my take on RSA encryption. If I am wrong Yes/No or someone else will set me straight.

First, here is the idea in a nutshell. When you raise m to the d power and then to the inverse of d, you end up back at m, the actual message being sent. In this case the inverse is very hard to find because you do not have the right information.

Dusty would like to receive secure messages from Garret, and Mandy would like to know what is in them. So Dusty finds two really large primes p and q, and multiplies them together to get n. Then he calculates φ(n), which he can easily do because he knows what p and q are, and φ(n) is merely (p-1)(q-1). He then chooses a number d relatively prime to and smaller than φ(n) to use as a power. Then he declares to the world that anyone wanting to send him a message must only raise that message to the power of d first. When Dusty receives a message from Garret it is raised to the power of d, so no one else is able to make sense of it. Dusty can decrypt it because he knows the inverse of d, which we call e.

Let the message be m. Garret only wants to send the number 7 to Dusty. He is telling him that afternoon's horse race will be fixed and where to put his money.

Garret makes this number: 7^d (mod n), and sends it to Dusty. 7^d is some other number, very large, certainly it is not 7. All Dusty has to do to know which horse to bet on is this: 7^(de) ≡ 7 (mod n), and we are back at 7, the original message, because d and e are inverses (with respect to φ(n)), and therefore reduce to 1 when multiplied together as exponents in the term 7^(de), and of course 7^1 = 7. Dusty deciphers the message by simply applying the inverse of what is called the public exponent, for that is the power everyone knows they must raise their messages for Dusty to.

Mandy, who is watching all this, cannot figure out what the message is. All she knows is that Dusty received some number raised to a very large power, and she even knows the power, which itself is an extraordinarily large number, and she even knows the number n. If she only knew what p and q were she could calculate the value of φ(n), and then she would know how to get the inverse of that huge number d in 7^d with respect to φ(n). Raising what she sees to that inverse, since 7^(de) ≡ 7 (mod n), would put her right back at 7, the content of Garret's message to Dusty.

* * * * *

The numbers involved in reality are beyond huge. Depending on the sensitivity of the application, some might have upwards of a thousand digits. To get the ball rolling, encryption folk like the public exponent d to be 65537, but only when that value is relatively prime to φ(n); otherwise they have to choose the next prime number. When Garret raises his message to the 65537th power, the result is a fairly large number, to use an understatement. Mandy cannot raise this number to the inverse power of d with respect to φ(n) because she would need to know the values of p and q to calculate φ(n), which would then make her task easy. Without knowing φ(n), she is reduced to brute force approaches. When the numbers are truly gigantic, brute force can take a long, long time, like the age of the universe or longer, to bring about the correct solution.

We could all agree that 7^65537 is a pretty large number. It is especially mysterious for Mandy because she does not know the base is 7. So just which number did Garret raise to this large power? You can see where that might take a long time to figure out without the right knowledge, which consists of the values of p and q.

That is it. Pretty simple, but it definitely took genius to conceive of.
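
The whole round trip fits in a toy sketch. It keeps the thread's naming, with d the public exponent and e the private one, which readers should note is the reverse of the usual RSA convention; the numbers are small textbook values, nothing like real key sizes:

```python
def egcd(a, b):
    """Extended gcd: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Inverse of a modulo m, assuming gcd(a, m) == 1."""
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Dusty's toy key:
p, q = 61, 53
n = p * q                  # 3233, published
phi = (p - 1) * (q - 1)    # 3120, kept secret
d = 17                     # public exponent, coprime to phi
e = modinv(d, phi)         # private exponent: d*e ≡ 1 (mod phi)

m = 7                      # Garret's message (which horse)
cipher = pow(m, d, n)      # what Mandy sees on the wire
assert pow(cipher, e, n) == m   # Dusty recovers 7
```

Mandy sees n, d, and the cipher, but without p and q she cannot compute phi and so cannot compute e.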

----------


## desiresjab

The above post is the bare bones approach. It does not say anything about all the techniques of implementation that require computer experts, or the padding schemes necessary to make encryption more secure, which require even more computer experts. I think one could devise an encryption system without being a computer expert, just from number theoretic knowledge. Let the computer guys figure out how to implement it.

My feeling is that none of the encryption systems are much different from the others in basic technique. They are all based on some common number theoretic function. We have already discovered that elementary number theoretic formulas could all be expressed in terms of one another, if we worked hard enough at it. So, at heart, these systems are not much different from one another, I suspect.

Just remember that the message we sent would be but one alphabetic character or number in a longer message, if we were sending a longer one. We chose a one character message to make the process more transparent.

----------


## desiresjab

I have finally filled in the missing link (for me) in Eisenstein's lattice point proof of Quadratic Reciprocity. It is now clear that the lattice points in the lower left triangle (labeled WAXY in the Wiki-peja article) really do represent quadratic residues. I do not know why it took so long for me to put the last piece in place, but it is now clear. There are so many angles to understand QR from. I understand only one proof of the theorem, but I do fully understand that one at least at last. How many ways can you pick pairs from sets of 5 elements and 3 elements with no pair ever being from the same set? Of course the two set sizes are (p-1)/2 and (q-1)/2.

----------


## Dreamwoven

All this advanced maths is way above my head, like a new language.

----------


## desiresjab

> All this advanced maths is way above my head, like a new language.


It is above my head, too, DW. That is why I am trying to understand it. A famous mathematician once said math is an unnatural way to think. True but strange, since it seems to be the language of natural things, at the very least their superficial language. For people like Euler and Gauss it is probably not unnatural. For world class mathematicians of any era, I have no idea whether it is a natural way. I suppose it must become so after a while. Even lowly I can notice a difference in my own abilities after having stuck with math assiduously for the last year.

Some of the misconceptions I had along the way and had to correct--why, it's preposterous and laughable. Gauss or Euler or the next level down from them would never have such problems on the same material, I imagine. They invented half the stuff I am still trying to understand well. All they had to guide them were their own instincts.

* * * * *

Anyone with a cosmological thought should voice them here. This is not a math thread but only seems so at the moment. Thoughts on the subject that are not math-oriented are not interrupting anything but adding to the discussion's breadth. I do not know peoples' ages on here. I know death begins to preoccupy the mind after a certain age. Death and cosmology go together.

* * * * *

Certain questions from months ago are still haunting me. Are types of universes possible which are logically impossible to us? I don't know how one would ever answer that.

----------


## desiresjab

I see now how e, pi and i got into those number theory equations. Like Yes/No said, they are for finding the _roots of unity_ for any x^n - 1 = 0, a long standing problem historically. I found an abstract algebra video where the instructor explained it with the unit circle for the _roots of unity_ of x^5 - 1 = 0. He showed how to get _roots of unity_ every 72 degrees, by multiples of one initial position equation.

They had me wondering, too. That is why I italicized them. Roots of unity are slightly different than the roots of equations we studied in high school, where we would surely be able to find the root x = 1 of x^5 - 1 = 0, but what about the other four, for by the Fundamental Theorem of Algebra it should have five roots? Yes, even lowly 1 has four other fifth roots. And the above is a way to get them.
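
The 72-degree picture can be checked directly with complex exponentials; a short sketch:

```python
import cmath

n = 5
# The n-th roots of unity: e^(2*pi*i*k/n), spaced 360/n = 72 degrees apart
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# each one really solves x^5 - 1 = 0 (up to floating-point error)
assert all(abs(r**n - 1) < 1e-9 for r in roots)

# exactly one of them is real: x = 1; the other four come in
# conjugate pairs off the real axis
real_roots = [r for r in roots if abs(r.imag) < 1e-9]
assert len(real_roots) == 1
```

The k = 0 term is 1 itself; the other four are the "other roots of lowly 1."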

I also straightened out my confusion on the use of the word _Order_. Group theory uses the term two different ways:

1. The number of elements in the group
2. The lowest power to which a group element has to be raised to return to the identity value.

The latter is close to what they officially named the Carmichael function over in number theory, though that function gives the largest such order modulo n. It also roughly equates to the order of an element of a modulus ring (which we are so familiar with by now) over in number theory.

I will just chip away, concept by concept--here on groups, here on rings, here on fields, here on algebraic numbers, here on forms, here on categories--until my Frankensteinian collage starts to take form. I do not have the background to forge straight ahead. I do have a fair instinct for where to chip to get what I want, and Yes/No seldom points the right way with the back of his neck.

----------


## Dreamwoven

I wouldn't call anything "unnatural". Maths is just very difficult to wrap my mind round, is all. I don't have the education for it and find it too difficult to learn, though I am sure it can be learned. The Wikipedia item on cosmology expresses it well: https://en.wikipedia.org/wiki/Cosmology.




> It is above my head, too, DW. That is why I am trying to understand it. A famous mathematician once said math is an unnatural way to think. True but strange, since it seems to be the language of natural things, at the very least their superficial language. For people like Euler and Gauss it is probably not unnatural. For world class mathematicians of any era, I have no idea whether it is a natural way. I suppose it must become so after a while. Even lowly I can notice a difference in my own abilities after having stuck with math assiduously for the last year.
> 
> Some of the misconceptions I had along the way and had to correct--why, it's preposterous and laughable. Gauss or Euler or the next level down from them would never have such problems on the same material, I imagine. They invented half the stuff I am still trying to understand well. All they had to guide them were their own instincts.
> 
> * * * * *
> 
> Anyone with a cosmological thought should voice them here. This is not a math thread but only seems so at the moment. Thoughts on the subject that are not math-oriented are not interrupting anything but adding to the discussion's breadth. I do not know peoples' ages on here. I know death begins to preoccupy the mind after a certain age. Death and cosmology go together.
> 
> * * * * *
> ...

----------


## desiresjab

I remember reading a book on Teleology back in my early twenties. No, it was called _The Cosmological Arguments_, and had a section called the _Teleological Argument_. I think it also had a section called _The Argument From Design_. I gave philosophy the college try in those days, wading through many traditional names. It was hard wading. I did not understand a lot of it. I found Heidegger hardest of all with also the hardest name to remember how to spell. What amazed me was how men could have so many thoughts and arrive at so many fast convictions about the universe. I was wondering if I would ever arrive at one fast conviction about the joint. Lo and behold, over the last few years I finally did. You have probably heard it. Whatever there is--universe or multiverse--it cannot have come from nothing. If something exists now, this so-called _nothing_ we came from was a false nothing, for it at least contained the possibility, the potential for something to come about, and potential is not nothing. The only thing that can come of nothing is nothing.

That does not mean that it was my thought, only that I was led there as a result of my own reflections (or so it seems) rather than lifting it from someone. That said, a million people must have said it. I said and I feel it. It is one of my few hardcore convictions about cosmology.

----------


## desiresjab

I will soon be off for a few days again, traveling to visit an ancient parent who is still fully sentient. I never take a computer. My brain is on its own when I travel, which forces me back to pen and paper if I need to calculate once I arrive, for I always drive. Driving at night on lonely highways is great for deep thinking. Just make sure no elk clips your mirror off and smashes your windshield, i.e. enter those fog banks at a crawl, honk your horn to be safer (it's lonely, right?) because elk may panic or be blinded and run right toward your headlights. They forgot already they are on a road, if they ever remembered. The noise scares them out of your way, hopefully.

* * * * *

I do not want to go away leaving misconceptions. The lattice points in WAXY in Eisenstein's rectangle in the Wiki-peja article on QR proofs represent all the ways that sets of 5 and of 3 elements can combine in pairs. Each pair is not interesting in itself, but only for which triangle it lies in; each point does not map back somewhere that tells us anything. Their whole point is that there are precisely _this many of them_, i.e. precisely φ/4 of them. Eisenstein never mentions φ, but what his four smaller rectangles are doing is dividing: φ(pq)/4, which can use the Euler phi function, for prime rectangles at least. Of course, he is working from the angle of Euler's criterion, because he has to make that connection and show that his diagram actually represents quadratic residues, at least in their correct number. Once we have understood him we can take the shortcut of φ/4 in perfect safety. It will work every time. However, it is faster just to multiply, but nice to know this other function we claim is ubiquitous (the phi function) offers us yet another example.

The diagonal should not be taken as another division by 2, as I mistakenly did some months past. It is a simple scalar, which can be a confusing word in mathematics. The diagram takes a ratio via the diagonal, nothing more. 

I am now interested again in the idea of that limiting ratio. In other words, how much relative difference between p and q will guarantee a different number of points in the two triangles of WAXY when the Legendre symbol of the two primes is even, for we realize two 4n+3 primes must always have a different number of points, regardless of their difference or ratio. With such a small slope for its diagonal we do not expect the pair (5, 41) to produce equal numbers of points, for instance. I believe I calculated them and there were 24 and 16 points in the two triangles respectively. On the other hand we know quite well (without having actually proved it) that twin primes will always have the same number of points in both triangles. We have seen it in primes that are not twins, too. This leads us to wonder what the limiting ratio is. Probably an easy question. But few questions in math are easy for me.
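
The counts are easy to reproduce. A sketch under the usual Eisenstein setup (points (x, y) with 1 ≤ x ≤ (p-1)/2 and 1 ≤ y ≤ (q-1)/2, split by the diagonal y = (q/p)x; no point lands on the diagonal when p and q are distinct primes), which matches the 24 and 16 found above for (5, 41):

```python
def triangle_counts(p, q):
    """Points below and above the diagonal of Eisenstein's rectangle
    for distinct odd primes p and q. The number below the diagonal in
    column x is floor(q*x/p); the total is ((p-1)/2) * ((q-1)/2)."""
    below = sum((q * x) // p for x in range(1, (p - 1) // 2 + 1))
    total = (p - 1) // 2 * ((q - 1) // 2)
    return below, total - below

assert triangle_counts(5, 41) == (24, 16)        # the pair discussed above
assert triangle_counts(11, 13) == (15, 15)       # twin primes split evenly
```

The total 24 + 16 = 40 is indeed φ(5·41)/4 = (4·40)/4, consistent with the φ/4 shortcut.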

So anyway, that is one of the things I will be thinking about in the car, all warm and lonely. I just could not live without loneliness, or maybe would not care to anymore. It seems like a real gift bestowed through cruelty sometimes. If life ends forever, I believe that would be cruel if a conscious creator had at least as much empathy as us, and I know the thought of final ending is cruel. I think we would hope for a conscious creator with a fair bit more empathy than ourselves. Minus a conscious creator, let us hope the structure of our being is so complex and deep that it happens to include an afterlife of some kind. We understand four percent of the "stuff" in the universe so far. We have probably not penetrated our own structure even that deeply yet.

----------


## YesNo

If the limit does not converge to one number there may be a set of numbers it converges to that would be interesting.

I agree that nothing comes from nothing. The way I look at it is that before the beginning there was no unconscious matter. There was nothing. After the beginning there must still be no unconscious matter. What we think is unconscious matter is an illusion. 

Part of the problem with e raised to the pi times i equaling -1 is that we think of e^x as a function graphed on an x and f(x) plane, where it increases exponentially. But in that case the x is always real. I used to confuse that x-f(x) plane with the complex plane, but it is different. If one goes pi radians about the unit circle one is at the -1 point: a 180 degree turn, or a pi radian turn.
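
A one-liner with Python's cmath confirms the unit-circle picture, up to floating-point error:

```python
# Going pi radians around the unit circle in the complex plane lands at -1.
import cmath
import math

z = cmath.exp(1j * math.pi)
print(abs(z - (-1)) < 1e-12)   # True: e^(i*pi) is -1 to machine precision
```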

Math may be a deceptive way to think about reality, more than an unnatural way to think. It leads one to think that determinism and randomness are to be expected, but I don't think there is anything that is deterministic or random in the universe unless we construct it to be so, like a mathematical theory or a computer (which eventually breaks down destroying the determinism we put into the machine). 

I think there are many universes since a single universe cannot be infinite without destroying the possibility of life, but they are all the same. They would follow an evolution that is similar to a spiral rather than a circle.

----------


## desiresjab

> If the limit does not converge to one number there may be a set of numbers it converges to that would be interesting.
> 
> I agree that nothing comes from nothing. The way I look at it is that before the beginning there was no unconscious matter. There was nothing. After the beginning there must still be no unconscious matter. What we think is unconscious matter is an illusion. 
> 
> Part of the problem with e raised to the pi times i equaling -1 is that we think of e^x as a function graphed on an x and f(x) plane, where it increases exponentially. But in that case the x is always real. I used to confuse that x-f(x) plane with the complex plane, but it is different. If one goes pi radians about the unit circle one is at the -1 point: a 180 degree turn, or a pi radian turn.
> 
> Math may be a deceptive way to think about reality, more than an unnatural way to think. It leads one to think that determinism and randomness are to be expected, but I don't think there is anything that is deterministic or random in the universe unless we construct it to be so, like a mathematical theory or a computer (which eventually breaks down destroying the determinism we put into the machine). 
> 
> I think there are many universes since a single universe cannot be infinite without destroying the possibility of life, but they are all the same. They would follow an evolution that is similar to a spiral rather than a circle.


Hello. Happy to be back.

I have noticed that opinions hardly interest me anymore. Even my own cloy my thinking. I am gorged. To me an understanding of how Carmichael numbers can form ideals is worth any number of opinions or unapproachable speculations right now. I have also been wont to speculate toward much larger pictures, and will be so again. But for now my speculations must end in mathematical truth I know is there. Full understanding of consciousness is not even remotely possible at this time. But for the assiduous, Carmichael numbers are, unless you happen to plum run out of brains. I have my hopes up that I have not run out yet.

While I was away I vaguely remembered one statement from an article I read: that every Carmichael number that is not already a double of a Carmichael number will have a double which is. I may have misread that statement. But I began to wonder anyway if each Carmichael and its single double form their own ideal, a set of which other Carmichael numbers are not members and with which they have nothing to do. I do not know yet. Haven't even checked yet. I was hoping you knew. I am looking for a Trojan horse in this siege.

Either Dedekind wins or I win. It has always been that way once I become obsessed. Directed obsession is the best tool a person has; it is the best one I have found. People set limits for themselves that are not necessarily true. It is a natural habit. I am interested in my own limits, the real ones, if ultimately there are any. I do not want to do IQ tests or physical puzzles, I want to see how far I can penetrate the nature of numbers. Seeing into numbers is seeing into the universe and maybe into God. Writers have a connection too, for God just spoke everything into existence according to one old text. I take the Byzantine creative ideal of Yeats as seriously as the math ideals of Dedekind. When Billy said, "The best lack all conviction, while the worst are full of passionate intensity," I do not think he meant my intensity, or his own, which aimed at understanding. Math is rarely "Modified in the guts of the living," however, as another Billy almost said, which shows the difference in the arts. Euler was as creative as Yeats. I nestle up to both.

Lunch is over. I hear the battle horn. The siege resumes.

----------


## desiresjab

> It is not all n-1 residue classes that are false witnesses to make a Carmichael number. Only those that are relatively prime to n. In the case of a Carmichael number, which is squarefree, one would have to get a factor for Fermat's criterion a^(n-1) ≡ 1 (mod n) to fail, but that would not be the case for a^n ≡ a (mod n). That would still work.
> 
> Consider 561 = 3*11*17, a Carmichael number (assuming the python is correctly programmed):
> 
> 3^561 ≡ 3 (mod 561), but 3^560 ≡ 375 (mod 561)
> 11^561 ≡ 11 (mod 561), but 11^560 ≡ 154 (mod 561)
> 17^561 ≡ 17 (mod 561), but 17^560 ≡ 34 (mod 561)
> 
> There seem to be at least three layers of tests, each restricting the exponent of the witness a bit more: 
> ...


I assumed what was in red after you said it, but doesn't the statement below contradict it from the first paragraph of the linked article?

"There are composite numbers n which fail this test no matter how we choose _a_...."

http://www.sciencedirect.com/science...22314X07002089

----------


## YesNo

As far as the best lacking conviction goes, it sounds like whining. But no one quotes me like they quote Yeats, which is probably a good thing.

I don't know much about Carmichael numbers except what I have explored with you. So Carmichael numbers come in pairs based on the doubling idea. That might make sense because phi(2) = 1. So Carmichael numbers can be even.

Edit: There are three tests. If one uses a^n ≡ a (mod n) then all integers a will give the desired result for a Carmichael number n. However, if one uses a^(n-1) ≡ 1 (mod n), then a has to be relatively prime to n for that to work. The third test should not have any Carmichael numbers although there are pseudoprimes. At least that is how I see it at the moment. I might be wrong.
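
A Python sketch of the three layers on 561 (I am taking the third test to be the standard strong, Miller-Rabin, form):

```python
# Three layers of Fermat-style tests on the Carmichael number 561 = 3*11*17.
from math import gcd

n = 561

# Layer 1: a^n = a (mod n) holds for EVERY a when n is a Carmichael number.
assert all(pow(a, n, n) == a % n for a in range(n))

# Layer 2: a^(n-1) = 1 (mod n) holds only for a relatively prime to n;
# a witness sharing a factor with n betrays it (e.g. 3^560 = 375 mod 561).
assert all(pow(a, n - 1, n) == 1 for a in range(1, n) if gcd(a, n) == 1)
assert pow(3, n - 1, n) != 1

# Layer 3: the strong (Miller-Rabin) test. Write n - 1 = 2^s * d with d odd.
d, s = n - 1, 0
while d % 2 == 0:
    d //= 2
    s += 1

def strong_probable_prime(a):
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

print(strong_probable_prime(2))   # False: 2 witnesses that 561 is composite
```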

----------


## desiresjab

Very good. I think you have it. 

But wait! I have to have misread. How can a Carmichael number have a double when they are all odd? Scratch that bad idea. Which means I am back to wondering how something that is not a multiple can be an ideal. There was in the beginning an _ideal number_ theory, then there came a more general theory of ideals, which is how Carmichaels were made into ideals. At that point I believe they are talking about algebraic integers and particular behaviors in groups instead of everyday integers. The wider theory still looks like Fermat's Little Theorem with different representatives as the exponents, which mean the same thing for groups as the familiar exponents mean for numbers. I will try to zero in on those group behaviors.

----------


## desiresjab

I was pleased and surprised to learn that in the ring of polynomials all equations whose constant equals zero form an ideal, because they all tend to zero as x vanishes toward mathematical nothingness.

Another fascinating fact was that in a ring of polynomials Carmichael numbers with only two factors occur. These are called Gauss-Carmichaels, or just G-Carmichaels. And I believe that _left and right multiplication_ simply refers to the non-commutative nature of multiplication in some rings and groups. I now think that when a prime _splits_ it probably refers to the unavoidability of non-commutative multiplication in the attempt to factor into prime ideals. The subtle differences between groups and numbers have to be observed. A ring of integers is not a ring of polynomials. It seems there are serious differences and serious similarities.

I think I read that the ideal also gives pros a reliable measure of how far from completely factorable a polynomial is in ideals. They add some exponents to get this value. I forget exactly which exponents though. A line of research begun by old Gauss two hundred years ago, I believe. It is ahead of myself and solid understanding. I often have to read ahead of myself.

P.S. Something like they add all the exponents not forced to zero when they impose conditions I forget. I cannot even remember the precise situation. Sometimes these half-memories do not work out, like the idea of a Carmichael double. I think I may have confused that with some kind of ideal double which indeed might exist.

----------


## YesNo

It is interesting that Gauss-Carmichaels can have as few as two factors. I remember a few days ago being convinced that there has to be three factors in a Carmichael number in the integers. This makes me wonder why factoring in a ring of polynomials can generate something different.

----------


## desiresjab

> It is interesting that Gauss-Carmichaels can have as few as two factors. I remember a few days ago being convinced that there has to be three factors in a Carmichael number in the integers. This makes me wonder why factoring in a ring of polynomials can generate something different.


It makes me wonder too. Maybe one has to get down among the greasy gears and watch this style of factorization for a while. I have never seen it done. I assume it is something different from the factoring one does in high school. How does it work? Can you factor such an expression for me?

----------


## YesNo

Just to make sure I am not confused I looked at this article: https://en.wikipedia.org/wiki/Polynomial_ring

A polynomial ring needs some symbol x and coefficients in some ring such as the integers. Then (x+1)(x+2) = x^2 + 3x + 2 is a factorization. I don't think it is anything more than that, but again, I might be missing something.
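
A minimal sketch of that multiplication, representing a polynomial as a coefficient list:

```python
# Multiply two polynomials given as coefficient lists (lowest degree first).
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b   # coefficient of x^(i+j)
    return out

# (x + 1)(x + 2): [1, 1] is 1 + x, [2, 1] is 2 + x.
print(poly_mul([1, 1], [2, 1]))   # [2, 3, 1], i.e. 2 + 3x + x^2
```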

----------


## desiresjab

> Just to make sure I am not confused I looked at this article: https://en.wikipedia.org/wiki/Polynomial_ring
> 
> A polynomial ring needs some symbol x and coefficients in some ring such as the integers. Then (x+1)(x+2) = x^2 + 3x + 2 is a factorization. I don't think it is anything more than that, but again, I might be missing something.


I am trying to determine if that is precisely what they mean when they call a factor irreducible in a polynomial ring. I know it equates somehow to primes but is not quite a prime. I also suspect it perhaps has something to do with the roots of unity technique I observed on the abstract algebra video.

For instance, it seems to me that your example equation would not be further reducible because, for one criterion, it has already been factored to linear terms. I believe that is a strong criterion for an irreducible factor--that it be in linear terms. Even something like x^4 + 1 = 0 has to be in linear terms to be factored irreducibly, I believe. I don't know if all equations can be. In fact, though, I may have read that some cannot be made irreducible even over the Complex numbers. There is much to learn along the way.

More intense study of groups has to be next. In the Wiki-peja article on Group Theory it states that _Algebraic Number Theory is a special case of Group Theory, so follows the rules of the latter_. This looks like Plymouth Rock to me. It means that to get my chokehold on ideals I will next have to retreat and work more intensely at Group Theory. I am out of sequence in my studies, but maybe I can survive it. My Abstract Algebra is ahead of my Linear Algebra (practically non-existent for me) and Group Theory (mediocre to half-assed decent). I seem to be learning them all at the same time, but only because it seems necessary in the quest to nail down ideals in polynomial rings.

----------


## YesNo

That example should be completely factored. This Wikipedia article provides a good review: https://en.wikipedia.org/wiki/Irreducible_polynomial

Whether a factorization of a polynomial is reducible or irreducible depends on the ring the coefficients belong to.

There is a distinction between primes and irreducibles; however, they are the same if one is in a "unique factorization domain" like the integers. This is what makes unique factorization important and not obvious, even though it seems obvious. All of these distinctions can be either a pain in the rear or a delightful puzzle that one cannot put down.

Edit: In addition to not being able to be factored further, a prime p has the property that if p divides n = ab, then either p divides a or p divides b. If a natural number n could be factored into irreducible positive integers in more than one way, that would not be the case. I wonder if it is proper to define a prime by this property or to use this property to define a unique factorization domain?
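
To see why that division property matters, here is the classic textbook example of an irreducible element that is not prime, in the ring Z[sqrt(-5)] (whether this is the example from the lectures is an assumption), sketched in Python:

```python
# Elements of Z[sqrt(-5)] as pairs (a, b) meaning a + b*sqrt(-5).
# Classic example: 6 = 2*3 = (1 + sqrt(-5))(1 - sqrt(-5)), and 2 is
# irreducible yet NOT prime: it divides the product but neither factor.
def mul(u, v):
    (a, b), (c, d) = u, v
    return (a * c - 5 * b * d, a * d + b * c)

def divides(u, v):
    # u | v exactly when v * conj(u) / norm(u) has integer components
    (a, b), (c, d) = u, v
    n = a * a + 5 * b * b                        # norm of u
    re, im = c * a + 5 * d * b, d * a - c * b    # components of v * conj(u)
    return n != 0 and re % n == 0 and im % n == 0

product = mul((1, 1), (1, -1))
print(product)                     # (6, 0): both factorizations give 6
print(divides((2, 0), product))    # True:  2 divides 6
print(divides((2, 0), (1, 1)))     # False: 2 does not divide 1 + sqrt(-5)
print(divides((2, 0), (1, -1)))    # False: so 2 fails the prime property
```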

----------


## desiresjab

A fellow before my last sleep was discussing in his advanced mathematics video how computers will change formal mathematics. He says some of the infinite operators in math of the past few centuries will be defined differently in this century in finite terms that can be computed. You cannot compute something requiring infinite operations to get the job done.

This fellow calculated the roots of unity of x^15 - 1 without any recourse to problematical Complex numbers which will not factor nicely. He makes use of something he calls Quadrance. He does not want to use its usual name of Norm because that is his name and Norm is used in another part of mathematics with a completely different meaning.

It appears mathematicians have been troubled over The Fundamental Theorem of Algebra since Gauss came up with it, kind of like Euclid's parallel postulate bothered great mathematicians for centuries. It turns out they had good reason to be bothered, since the postulate is not universally true. I wish I were going to be around to observe the eventual fate of The Fundamental Theorem of Algebra.

According to this fellow, _ideals_ went a great ways toward eliminating some of the conflicts in the field by allowing their own arithmetic in which complete factoring was accessible. I will have to make sure of that last statement.

----------


## YesNo

Interesting observation about the Fundamental Theorem of Algebra. I'm trying to see how it is like Euclid's parallel postulate, but it may be.

What is the link to the video about the change away from infinite operators in formal mathematics?

----------


## desiresjab

> Interesting observation about the Fundamental Theorem of Algebra. I'm trying to see how it is like Euclid's parallel postulate, but it may be.
> 
> What is the link to the video about the change away from infinite operators in formal mathematics?


Actually, he did not use the word _operator_, that was my bad handoff, he said away from infinite processes. I like this guy. He knows the mathematics really well, well enough to have deep convictions about the way math should be fundamentally reorganized and taught. He is a real nut in a way, a Kronecker throwback who even teaches something called Rational Trigonometry and eschews most theories in mathematics that smack of the infinite. He at least tries to make it smack less. Out of all the math lecturers I have watched on the internet he is the best, most organized, easy to see, hear and understand with the best laid down presentation. He has a ton of videos and he got me hooked. I can lead you to the right YouTube menu, and maybe the right video.

In the integers Z all ideals are principal ideals, because every ideal is the set of multiples of a single generator. Sometimes it is hard to distinguish if they mean algebraic integers when they say integers. If integers have only this multiple type of ideal, then it must be the algebraic integers in complex numbers that allow other types of ideals. Anyway...

https://www.youtube.com/watch?v=H8xBlLWdzBE

https://www.youtube.com/watch?v=GMZoXXaOFeQ

If it was neither of those it was probably in one of his two videos on Galois theory, though later on he talks about it more extensively.
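
Back to the claim that all ideals in Z are principal: a brute-force Python sketch shows the linear combinations of several generators collapsing to the multiples of a single number, their gcd.

```python
# In Z, the ideal generated by a set of integers is just the multiples of
# their gcd, which is why every ideal of Z is principal. Check it by brute
# force for the generators 12 and 18 within a small window of integers.
from math import gcd

def ideal_elements(x, y, bound):
    # all combinations a*x + b*y with small coefficients, kept in [-bound, bound]
    combos = {a * x + b * y for a in range(-bound, bound + 1)
                            for b in range(-bound, bound + 1)}
    return {e for e in combos if abs(e) <= bound}

g = gcd(12, 18)
print(g)                                                              # 6
print(ideal_elements(12, 18, 30) == set(range(-30, 31, 6)))           # True
```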

----------


## YesNo

I think he mentioned it in the first of those two lectures. He does not believe in infinite sets nor in processes that involve an infinite number of operations. The Axiom of Choice would be needed to assume one could do an infinite number of steps or choices.

I noticed in his two lectures that he called things "prime" that I would have only called "irreducible" and not prime. He did provide an example of a ring without unique factorization in the second lecture. It is good to know that such things exist because it makes one value the Euclidean algorithm for division.

I wonder why he does not like having infinite sets? In my case, I do not assume that the universe has to contain infinitely many integers for this to be true. I wonder if there is more to his objections than that.

----------


## desiresjab

> I think he mentioned it in the first of those two lectures. He does not believe in infinite sets nor in processes that involve an infinite number of operations. The Axiom of Choice would be needed to assume one could do an infinite number of steps or choices.
> 
> I noticed in his two lectures that he called things "prime" that I would have only called "irreducible" and not prime. He did provide an example of a ring without unique factorization in the second lecture. It is good to know that such things exist because it makes one value the Euclidean algorithm for division.
> 
> I wonder why he does not like having infinite sets? In my case, I do not assume that the universe has to contain infinitely many integers for this to be true. I wonder if there is more to his objections than that.


He talks about his objections more extensively in other videos. He feels that mathematics has gone the way of complex convolutions hardly anyone can understand. I only partially agree. But if there is an easier way to approach many topics as he claims, I am interested. His claim that e and i and pi are not needed to find the roots of unity of x^15 - 1, in other words cyclotomic monic polynomials, is quite appealing. He does it on the video. I only partially followed the reasoning, but I will go back for another helping.

Something he complains heavily about is the lack of good examples in these highly abstract areas such as infinite sets. Though he does a fine job, I could level the same complaint his way. Where is my example in detail of an ideal other than the simple principal ones of the integers? After extensive investigation I still have no idea how a Carmichael number can have anything to do with an ideal. A third grader could understand a principal ideal; now show me how these other objects can be ideals when they are not multiples of a generator, or whatever. I am a little peeved at the math _industry_ myself, you might say. I have put in enough effort. Are these idiots lost in their abstractions hiding the answer or am I simply too thick? Certainly it takes time, for there are times I will read right over a key statement multiple times without noticing it holds the answer. I do not know what I am overlooking this time.

----------


## desiresjab

Continuing with my own whines...

It was very neat, for instance, how they "said," (and that is about all they did) all equations that have a constant of 0 form an ideal. Okay. Neat. How is that? Are they multiples of one another? I think not. Then in exactly what way does having a constant of 0 unite them into something we call an ideal? I don't get the parameters. What qualifies to make something an ideal, and what does not qualify, for that matter?

----------


## YesNo

I might be misunderstanding this, but regarding polynomials that have a constant term 0, they would be generated by the polynomial x. Take any polynomial p out of the ring of polynomials, say p = a_n x^n + ... + a_1 x + a_0, and multiply p by x. You will get another polynomial that has the constant term 0, because the polynomial p from the ring, which might have had a constant term a_0 not equal to 0, was multiplied by x, making the constant term of the product, xp, equal to 0.

I don't see, at the moment, how Carmichael numbers relate to ideals, but I suspect they do. 
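
In coefficient-list terms, multiplying by x is just a shift, which makes the zero constant term obvious:

```python
# Multiplying any polynomial by x shifts its coefficients up one degree,
# so the product always has constant term 0: the ideal generated by x.
def times_x(p):
    # p is a coefficient list, lowest degree first
    return [0] + p

p = [7, 3, 1]            # 7 + 3x + x^2, with constant term 7
print(times_x(p))        # [0, 7, 3, 1]: the constant term is now 0
```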

----------


## desiresjab

> I might be misunderstanding this, but regarding polynomials that have a constant term 0, they would be generated by the polynomial x. Take any polynomial p out of the ring of polynomials, say p = a_n x^n + ... + a_1 x + a_0, and multiply p by x. You will get another polynomial that has the constant term 0, because the polynomial p from the ring, which might have had a constant term a_0 not equal to 0, was multiplied by x, making the constant term of the product, xp, equal to 0.
> 
> I don't see, at the moment, how Carmichael numbers relate to ideals, but I suspect they do. 


So the constant term of x is zero but we multiply by it as if it were there anyway. It is there (invisibly) in the constant term which has to be multiplied by every term in p, so that particular row of the polynomial multiplication will be all zeros. Is this correct?

If I am not misinterpreting this, you are saying that every polynomial without a zero constant can be multiplied by any polynomial that has a zero constant and the result will be a polynomial that has a constant of zero. Is this correct?

But would not this operation put any polynomial at all in the ideal?

And couldn't any polynomial with zero constant force every polynomial into its particular ideal?

----------


## YesNo

I assume we want a set, an ideal, that contains all the polynomials with 0 in the last constant term. One way to get that would be to take all the polynomials and multiply them by x + 0. Some polynomials would not be in this ideal. For example the polynomial x + 1 would not be in the ideal. It has 1 as a constant term. But we could use it to get (x + 0)(x + 1) = x^2 + x + 0, which is in the ideal. 

If we used x^2 + 0 as the generator we would miss x + 0 in that ideal. Not all polynomials with 0 in the constant term would be in that ideal.

----------


## desiresjab

> I assume we want a set, an ideal, that contains all the polynomials with 0 in the last constant term. One way to get that would be to take all the polynomials and multiply them by x + 0. Some polynomials would not be in this ideal. For example the polynomial x + 1 would not be in the ideal. It has 1 as a constant term. But we could use it to get (x + 0)(x + 1) = x^2 + x + 0, which is in the ideal. 
> 
> If we used x^2 + 0 as the generator we would miss x + 0 in that ideal. Not all polynomials with 0 in the constant term would be in that ideal.


Yes, I am finally beginning to see. Any equation with 0 as a constant will generate from any other equation a multiple of itself. It can generate every multiple of itself on any other equation, but not the same multiples that most other 0 constant equations can generate of themselves, for they are different multipliers unless their difference is a trivial one of not being reduced fully.

Now, just what is the supposed infinite set here, the multiples of itself that a 0 equation can generate from one other equation, or what it can do to all equations? It must be the latter from what you say. Any equation with 0 as constant will spread that effect on multiplication. I am hesitant to use the word class right now because I know that word is probably used in a precise manner in the language of ideals ahead.

So, from all other equations it generates an infinite set by multiplying itself with them. This is the infinite class of ideals associated with this particular equation. Now another distinct equation with 0 as a constant does the same thing to all other equations, and thereby generates its own infinite class of ideals based on being a multiple of _itself_. That is how I see it at the moment. Next I need to see how Carmichael numbers form ideals, unless I have still got something wrong. I am almost there.

----------


## YesNo

The infinite set would be the set of polynomials that have the generator as a factor. 

If one uses 0 as the generator one would get the zero ideal which has only 0 in it. The ideal with 1 as the generator would be the opposite extreme and contain all the polynomials.

It seems to me at the moment that when one looks at ideals one is looking at all the multiples of a generator. Often I think of a prime p in the integers as a positive integer such that if p divides ab then p divides a or p divides b. That would be looking at a division property of the prime rather than all of its multiples. For a prime ideal P one would rewrite that as: if ab is in P then either a is in P or b is in P. https://en.wikipedia.org/wiki/Prime_ideal
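
That rewritten property can be tested directly for ideals nZ with a brute-force Python sketch:

```python
# Test the prime-ideal property for nZ: "ab in nZ implies a in nZ or b in nZ".
# In Z this holds exactly when n is prime; brute force over small a and b.
def is_prime_ideal(n, bound=50):
    for a in range(1, bound):
        for b in range(1, bound):
            if (a * b) % n == 0 and a % n != 0 and b % n != 0:
                return False   # counterexample: ab is in nZ, neither factor is
    return True

print(is_prime_ideal(5))   # True:  5Z is a prime ideal
print(is_prime_ideal(6))   # False: 2*3 is in 6Z but neither 2 nor 3 is
```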

----------


## desiresjab

In other words the generator only makes a single copy of each equation it comes in contact with. If I have a generator equation G and another equation F, then multiplication produces a multiple of G. It is not exactly a multiple of F, because F did not have 0 as constant and the new equation does. If G multiplied itself with F more than once it would in effect be squaring itself, which is not part of the deal.

So a generator function acts once upon each equation in the world without a zero constant, and thereby produces an infinite set of equations with 0 as constant because there are infinite equations without 0 as constant to multiply itself with.

To me this is different than a single integer generating an infinite set of intervals (ideals) on the number line, for I see nothing regularly spaced about these new equations generated by G. However, I have certainly read that *every ideal in the integers or Gaussian integers is a principal ideal*. As far as I can tell, this is not true of algebraic integers, which are strictly the roots of equations. Not every Gaussian integer is the root of some equation, is it? There is some confusion still whether expositors are speaking of Gaussian integers or algebraic integers at a given time in a discussion. That is, algebraic integers would not entirely fill the lattice points of the Complex plane as Gaussian integers do. Is there any truth to this or am I misinterpreting something?

----------


## desiresjab

If I took this new equation generated by GxF and multiplied it times each integer, then I would have infinite multiples of G from one F, just as I can generate with a plain integer, but that does not seem to be part of the definition (the deal). Here the equations (F) without 0 as constant stand in an infinite line waiting to be multiplied with G. Those single multiplications on each F generate the infinite set of ideals (equations which are all multiples of G).

Name a new multiple of G as H. Now G would divide each H it generates. But each H in turn does not have to divide every other H, any more than 6 has to divide 9 just because they are both multiples of 3. Though not multiples (necessarily) of one another, some of the new equations in the infinite set would have to be multiples of some other equations in the new set, just as 12 is not a multiple of 9 but is a multiple of 6. Equations in the infinite set which happen to be exact multiples of one another would form an infinite subset within the set. Whether this subset carries much meaning I do not know.

----------


## desiresjab

In the vector-ball diagram in the Wiki-peja article you linked to, am I to take it that the top purple row is actually an infinite row of pure primes? Otherwise they would be saying that only 2, 3, and 5 can generate ideals.

----------


## desiresjab

Broken back to English, it seems to go like this for ideals in commutative rings:

1. Prime ideals are of the form nZ, where n is a prime.

2. Primary ideals are composed of powers only of a prime element. This means n, the prime element, is also primary, even with the lowly power of 1.

3. Semiprime ideals are combinations of more than one prime, but which are also square-free.

I heartily agree with Wildberger that more good examples are needed. The Wiki-peja article on _Semiprimes_ is valuable just because it gives a specific example, which cuts off a lot of questions at the pass. The article notes with the required specificity for dummies that 30Z would be a semiprime ideal, whereas 12Z would not be. Mathematicians act like specific examples are going to kill them or lower their princely standards. The example makes it clear that 30 is semiprime because its factors are no more than single powers of primes.

For myself, adjusting to the language of ideals will take further familiarization to become entirely comfortable. Experts often talk somewhat loosely among themselves, and tend to continue this trend in their expositions. Most mathematicians are poor expositors when it comes to bringing their abstract notions out of the darkness for laymen or even interested amateurs.

There is a reason for this: the second job is more formidable. With another expert, talking over concepts is easy. As Wildberger notes somewhere, it is basically Santa Claus to the Easter Bunny power, a pure manipulation of symbols. When I am done, I will be able to make the notion of ideals and their ramifications clear to an interested person. If it is clear to me, I should be able to do that.

By the way, I am looking at 12Z. It is not a prime, it is not primary, and it is not semiprime either. It is white on the diagram. It must play the role of a strict composite in ideals. I don't know, I am just guessing. I will overcome many impasses and wrong notions as I continue to chip away. In the end I have to be able to perform the arithmetic of ideals as easily as I can perform modular arithmetic in integers.
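
The three definitions above translate into a factorization check; a sketch (reading them as: nZ is prime when n is prime, primary when n is a prime power, semiprime when n is squarefree), which also confirms that 12Z falls outside all three:

```python
# Classify the ideal nZ from the shape of n's factorization, per the
# three definitions above.
def factorize(n):
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def classify(n):
    f = factorize(n)
    if len(f) == 1 and max(f.values()) == 1:
        return "prime"
    if len(f) == 1:
        return "primary"      # a power of a single prime
    if max(f.values()) == 1:
        return "semiprime"    # squarefree, more than one prime
    return "none of the three"

print(classify(5))    # prime
print(classify(8))    # primary (2^3)
print(classify(30))   # semiprime (2*3*5)
print(classify(12))   # none of the three (2^2 * 3)
```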

----------


## YesNo

> In other words the generator only makes a single copy of each equation it comes in contact with. If I have a generator equation G and another equation F, then multiplicatiobn produces a multiple of G. It is not exactly a multiple of F, because F did not have 0 as constant and the new equation does. If G multiplied itself with the F more than once it would in effect be squaring itself, which is not part of the deal.


An ideal is a subset of a ring. If G is a generator of an ideal and F is any element in the ring then GF is in the ideal although F may not be in the ideal. In particular since G is in the ring, then GG is in the ideal. So the square of G is in the ideal, just as one would expect the square of any prime to be in the set of multiples of that prime in the integers. 




> So a generator function acts once upon each equation in the world without a zero constant, and thereby produces an infinite set of equations with 0 as constant because there are infinite equations without 0 as constant to multiply itself with.
> 
> To me this is different than a single integer generating an infinite set of intervals (ideals) on the number line, for I see nothing regularly spaced about these new equations generated by G. However, I have certainly read that *every ideal in integers or Gaussian integers is a principle ideal*. As far as I can tell, this is not true of algebraic integers, which are strictly the roots of equations. Every Gaussian integer is not the root of some equation, is it? There is some confusion still whether expositors are speaking of Gaussian integers or algebraic integers at a given time in a discussion. That is, algebraic integers would not entirely fill the lattice points of the Complex plane as Gaussian integers do. Is there any truth to this or am I misinterpreting something?


I think every Gaussian integer would be the root of a polynomial with integer coefficients. Let a + bi be a Gaussian integer, where a and b are regular integers. Note that (a + bi)(a - bi) = a^2 + b^2, an integer. Multiply together (x - (a + bi))(x - (a - bi)) to see if this forms a polynomial with integer coefficients. I get x^2 - 2ax + a^2 + b^2, unless I made a mistake. So the arbitrary Gaussian integer a + bi is the root of a polynomial with integer coefficients.
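To double-check that algebra without trusting my pencil work, here is a quick Python sketch (my own, purely numerical):

```python
# Numerical check that any Gaussian integer a + bi is a root of
# x^2 - 2ax + (a^2 + b^2), a quadratic with integer coefficients.
def min_poly_value(a, b, x):
    """Evaluate x^2 - 2ax + (a^2 + b^2) at a (possibly complex) x."""
    return x * x - 2 * a * x + (a * a + b * b)

for a, b in [(1, 2), (3, -5), (0, 7)]:
    root = complex(a, b)
    assert abs(min_poly_value(a, b, root)) < 1e-9
print("all Gaussian integers tested are roots of their quadratic")
```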

A principal ideal is an ideal generated by a single element: https://en.wikipedia.org/wiki/Principal_ideal

I am asking myself if it is true that every ideal in the Gaussian integers is a principal ideal. I think it is, because of what you mentioned, but I will have to find a proof.

----------


## YesNo

> In the vector-ball diagram in the Wikipedia article you linked to, am I to take it that the top purple row is actually an infinite row of pure primes? Otherwise they would be saying that only 2, 3, and 5 can generate ideals.


Yes, the diagram was only partial. The purple row contains all the primes in the full diagram which can't be written out. However, now that you pointed it out, that diagram mentions a bunch of terms: prime ideals, semi-prime ideals and primary ideals. They apparently mean different things and have some use value, but now I am trying to clarify in my own mind what those are.

Edit: I just saw your recent post. I think you have clarified these terms.

----------


## desiresjab

> Yes, the diagram was only partial. The purple row contains all the primes in the full diagram which can't be written out. However, now that you pointed it out, that diagram mentions a bunch of terms: prime ideals, semi-prime ideals and primary ideals. They apparently mean different things and have some use value, but now I am trying to clarify in my own mind what those are.
> 
> Edit: I just saw your recent post. I think you have clarified these terms.


If a principal ideal is generated by a single element, then primary ideals must also be principal ideals, since they are powers of a single element. Keeping all the lingo straight in order to go farther apparently comes with the territory. Specific examples are the color gold in an otherwise black and white setting. All prime ideals are primary, but obviously all primaries are not prime. I still do not know the official classification of 12Z.

Primes are both semiprime and primary, two different branches, which makes sense because they are the generators of everything after all (not sure about 0, however), so it seems intuitive that they _should_ generate all the branches.

* * * * *

My attention has been diverted by the question of whether *all* integers defy the Fundamental Theorem of Arithmetic over the Complex numbers. I saw 5 factored two different ways. The same technique should work for any prime--just use the conjugate. Since a composite can be broken into prime factors (which themselves defy unique factorization), the composite has more than one factorization as well. The number of ways to factor would be a simple combinatorial extension of adding in more factors, though some of them do not work together. Hence, *all* integers defy unique factorization over the Complex field.

Excuse that little aside. I needed that. I am that rusty in areas.

----------


## YesNo

I was thinking about the different terms today as well while walking. This is how I see it.

If one has a field, a special kind of ring where all elements (except 0) have a multiplicative inverse, then there are only two trivial ideals: the whole ring and the set containing only 0. We can forget about fields except as sources of examples.

So a ring has to have elements that do not have multiplicative inverses for ideals to be interesting. The integers would be an example of such a ring as well as polynomials.

Here are the definitions: 
1) Ideal: a subset of a ring generated by a finite set of elements.
2) Principal Ideal: an ideal that can have the set of generators reduced to one element.
3) Zero ideal: the ideal generated by the 0 element and containing only 0.
4) Unit ideal: the whole ring generated by a unit such as 1.
5) Prime ideal: a principal ideal generated by an element p such that if ab is in the ideal then either a is in the ideal or b is in the ideal. For example, the ideal generated by 6 would not be a prime ideal, since 36 = 4 * 9 is in the ideal but neither 4 nor 9 is a multiple of 6, so neither is in the ideal.
6) Semi-prime ideal (radical ideal): is an ideal generated by a square-free integer. Here the ideal generated by 6 = 2 * 3 would be example and the ideal generated by 12 would not be an example.
7) Primary ideal: is an ideal generated by the power of a prime.

I am sure there are other critical definitions and then one needs to find out how these work in many different rings.
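Using the definitions above, a little Python sketch (mine, just for experimenting) can classify any ideal nZ in the integers by factoring its generator; it also settles the 12Z question from earlier:

```python
# Classify the ideal nZ as prime, semiprime (square-free generator), and/or
# primary (generator a power of a single prime), per the definitions above.
def prime_factors(n):
    """Return the prime factorization of n > 1 as a dict {p: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def classify_ideal(n):
    f = prime_factors(n)
    return {
        "prime": len(f) == 1 and max(f.values()) == 1,
        "semiprime": max(f.values()) == 1,   # square-free generator
        "primary": len(f) == 1,              # power of a single prime
    }

print(6, classify_ideal(6))    # 6 = 2 * 3: semiprime, not prime or primary
print(12, classify_ideal(12))  # 12 = 2^2 * 3: none of the three
print(8, classify_ideal(8))    # 8 = 2^3: primary, not prime or semiprime
```

So 12Z comes out as none of the three, a strict composite, as suspected earlier in the thread.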

In the Gaussian integers 5 is not a prime because it can be factored since 5 = 1^2 + 2^2 = (1 + 2i)(1 - 2i). This is true of all primes in the regular integers that are congruent to 1 mod 4. But primes in the regular integers that are congruent to 3 mod 4 cannot be represented as a sum of squares and so they are prime even in the Gaussian integers since they are irreducible. The Gaussian integers are supposed to be a unique factorization domain which means irreducibles are primes. To construct the Gaussian integers add to the regular integers i = sqrt(-1). The example that did not have unique factorization was when one added sqrt(-5) to the regular integers. The unique factorization failed in that case, but these are not the Gaussian integers.
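A small numerical experiment (my own sketch, not a proof) checking that the 1 mod 4 primes split as sums of two squares while the 3 mod 4 primes do not:

```python
# Check which primes p can be written as p = a^2 + b^2 = (a + bi)(a - bi).
def two_square_split(p):
    """Return (a, b) with a^2 + b^2 == p, or None if no such pair exists."""
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            return (a, b)
        a += 1
    return None

for p in [5, 13, 17, 29]:          # all congruent to 1 mod 4: they split
    assert two_square_split(p) is not None
for p in [3, 7, 11, 19, 23]:       # all congruent to 3 mod 4: they do not
    assert two_square_split(p) is None
print("splitting pattern confirmed for the primes tested")
```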

----------


## desiresjab

> I was thinking about the different terms today as well while walking. This is how I see it.
> 
> If one has a field, a special kind of ring where all elements (except 0) have a multiplicative inverse, then there are only two trivial ideals: the whole ring and the set containing only 0. We can forget about fields except as sources of examples.
> 
> So a ring has to have elements that do not have multiplicative inverses for ideals to be interesting. The integers would be an example of such a ring as well as polynomials.
> 
> Here are the definitions: 
> 1) Ideal, a subset of a ring generated by a finite set of elements.
> 2) Principal Ideal: an ideal that can have the set of generators reduced to one element.
> ...


I pretty much have most of that. But I forgot something critical, which I was supposed to know, which I put in blue; and I have completely overlooked something critical, if it is true, which I have put in red.

----------


## desiresjab

Then again, it seems like I have done exactly the same thing to each of these:

Factorization of 5.

(√4+i)(√4-i)=4-√4 i+√4 i+1=5

Factorization of 6

(√5+i)(√5-i)=5-√5 i+√5 i+1=6

Factorization of 7

(√6+i)(√6-i)=6-√6 i+√6 i+1=7

----------


## YesNo

In the real numbers or the rationals or the complex numbers, which are all fields, you can factor 5 in many ways. Everything in those sets (except 0) has a multiplicative inverse. They are all units. There are no primes or irreducibles. 

It is only when you have a set like the integers or the Gaussian integers or even some set of algebraic integers such as the integers with sqrt(-5) added to them, that you don't have multiplicative inverses for everything. Because not everything (except 0) has a multiplicative inverse there are elements that could be called irreducible, or if the ring is a unique factorization domain, a prime.

----------


## desiresjab

I tried factoring the number 6 down further because:

The factorization of 2=

(√1+i)(√1-i)=1-√1 i+√1 i+1=2

And the factorization of 3=

(√2+i)(√2-i)=2-√2 i+√2 i+1=3. Therefore

(√1+i)(√1-i) *x* (√2+i)(√2-i),

is just another expression of 2 times 3, and does work by my clumsy calculations, but only 8 of the possible 24 permutations of the factors, which we can name ABCD, will produce 6. 

(AB)CD, (AB)DC, (BA)CD, (BA)DC and the palindrome (here I mean reverse order) of each works, for a total of eight. Any other order of factors does not produce 6 for me, but those eight orders do. I do not know if this is enough to be defined as a generalization of commutativity in the Complexes, or if it signifies the Complex version of non-commutativity. From an integer standpoint it seems some commutativity and some non-commutativity are involved.

----------


## desiresjab

> In the real numbers or the rationals or the complex numbers, which are all fields, you can factor 5 in many ways. Everything in those sets (except 0) has a multiplicative inverse. They are all units. There are no primes or irreducibles. 
> 
> It is only when you have a set like the integers or the Gaussian integers or even some set of algebraic integers such as the integers with sqrt(-5) added to them, that you don't have multiplicative inverses for everything. Because not everything (except 0) has a multiplicative inverse there are elements that could be called irreducible, or if the ring is a unique factorization domain, a prime.


I may require a slight perspective change. Water is trying to soak into plastic here. It just may be that I am so used to thinking of commutative rings with a modulus involved that I need to step back and temporarily release the notion of a modulus ruling the ring, to see ideals more clearly. With a prime modulus in the normal integers every element of the ring will have an inverse, and no two elements share the same inverse, I believe. In a modulus ring of integers there is some notion of divisibility, whereas strictly within the integers no inverse could exist. But stick a modulus anywhere and some inverses will appear.

----------


## YesNo

If one is thinking of a modulus one might be in a field of equivalence classes of integers rather than the integers themselves. For example, Z5 would contain five equivalence classes or sets of integers. Each would have a different remainder in the integers modulo 5. This would be a field since every equivalence class or element of that field except the 0 equivalence class would have an inverse. There would be no primes or irreducibles in that structure.
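A quick Python check (my own sketch) that every nonzero class modulo the prime 5 has exactly one multiplicative inverse, which is the property that makes Z5 a field:

```python
# Verify that each nonzero residue class mod 5 has exactly one inverse.
p = 5
for a in range(1, p):
    inverses = [b for b in range(1, p) if (a * b) % p == 1]
    assert len(inverses) == 1          # exactly one inverse per nonzero class
    print(a, "^-1 =", inverses[0], "(mod", p, ")")
```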

It might be interesting listing different fields and then different kinds of rings that are not fields just to get a set of examples to work with.

The fields would contain the following: (1) real numbers, (2) rational numbers, (3) complex numbers, (4) equivalence classes of integers modulo a prime integer. The only ideals here are the zero ideal (0 element) and the unit ideal (entire field).

The rings would contain the following: (1) integers, (2) Gaussian integers, a + bi where a and b are integers, (3) rings of various algebraic integers, like the Gaussian integers but instead of i = sqrt(-1) some other root of a monic polynomial, (4) polynomials with rational coefficients. Rings should provide interesting examples of ideals.

There are probably many others. The above are all commutative and so there must be some non-commutative examples such as matrices.

----------


## desiresjab

Being a ring or a field depends on the number of defined operations, I believe. Fields, as I take it, usually have four defined operations, rings two or three operations.

----------


## desiresjab

You said earlier that all interesting examples of ideals were non-commutative. I do not see why that is true. I thought there were interesting examples in both, but my vision will develop as I continue. I do need to shed the habit of always thinking in modulus rings.

My thinking was temporarily derailed over the last few days by music. I made a few posts in a music thread and that got me listening again. I have to stay away from music or it derails everything else and takes over my intellectual life. I unfortunately have to segregate all of my interests like that. I can only do one field at a time.

I cannot live on human diurnal schedules. My cycle is about 41 hours instead of 24. My overlap is congruent to 17 (mod 24), with irregularities associated with obligations. Night and day are arbitrary to me. I got that way from decades of marathon poker sessions.

----------


## desiresjab

(√4+i)(√4-i)=4-√4 i+√4 i+1=5, also 1(5)=5

(√6+i)(√6-i)=6-√6 i+√6 i+1=7, also 1(7)=7

* * * * *

You assert there are more (in fact infinitely many) factorizations of 5. What are some? At the level above, the difference between 4n+1 and 4n+3 numbers is not obvious. The "splitting" they do must be visible from another vantage point.

----------


## YesNo

I don't know if the interesting examples of ideals are in commutative or non-commutative rings. At the time I was only able to think of commutative examples. 

I think it would depend on the ring or field whether there were infinitely many different elements that could divide into 5. In a finite field there would only be finitely many different elements to divide into 5. In the rationals, one could take every rational number and divide it into 5. These aren't really interesting because in the field 5 is a unit. It is not a prime. 

One could say that (1)(5) = 5 is a factorization, but the 1 is a unit. If one restricts the factors of 5 to be irreducibles or primes, then some rings such as the Gaussian integers would be able to factor 5 and other rings such as the regular integers would not.

Edit: Here's a list of algebraic structures. I have not studied most of them, but it is good to see them in one place with links: https://en.wikipedia.org/wiki/Algebraic_structure Maybe one day, I'll look at them more closely.

----------


## desiresjab

One thing we can be sure of: the concept of left and right ideals is applicable only to non-commutative rings. In a commutative ring such as a modulus ring, left and right ideals are the same, since x times r and r times x are not different. So any time they start talking about left and right ideals or splitting, you know they are talking about some non-commutative object.

Everywhere I turn are statements I do not understand. All I can do is take them one at a time, putting them on the hold list until I can get to them. For instance, why and how there are exactly 21 different quadratic fields is still quite a mystery to me.

----------


## desiresjab

Since ideals at heart are instances of multiples, we always need to be able to see how any ideal is a multiple. Any time that becomes clear, we have understood the ideal. Even with Carmichael numbers the quest can be reduced to finding and understanding what this multiple is of. What is the generator and what does it leave in its tracks?

In the case of Complex numbers there have to be two generators, as I see it, one for the integral x-axis and one for the imaginary y-axis. Even if they do not show it in many diagrams, the y-axis is really the i-axis, 1i, 2i, 3i etc. An example would be the lattice diagram in the following link:

http://mathworld.wolfram.com/Ideal.html

----------


## desiresjab

> In the case of Complex numbers there have to be two generators, as I see it, one for the integral x-axis and one for the imaginary y-axis...http://mathworld.wolfram.com/Ideal.html


I am not too sure about my statement here. I mean, I still think there are two generators (are they called units, or not?), but I am not sure they are assigned to each axis. (1+i) is not even on an axis.

* * * * *

My oversight recently that not every Gaussian integer is a root of an equation, or something like that, was due to a mix-up with something I read. Even as I wrote it I knew it must be wrong, but I wrote it anyway because I thought I had read it on a long-time-without-sleep binge just prior to that. I must apologize for that embarrassment. I don't really know what I confused it with either.

* * * * *

On to the more interesting question of ways to factor 5 in the Complex numbers. 

We already know (√4+i)(√4-i)=4-√4 i+√4 i+1=5.

To write (2+i)(2-i)=4-2i+2i+1=5 is essentially a trivial change from √4 to 2, so is cheating and not a valid different factorization. But how about this?:

(1+2i)(1-2i) = 1 - 2i + 2i - 4i^2 = 1 + 4 = 5 

*That is definitely a different factorization of 5*. Are there others? Maybe. I have not validated your claim yet. All it takes is the above to show lack of unique factorization. I am simply curious if there are more, or infinitely many, as you said, I believe.
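Python's built-in complex type confirms both factorizations (a trivial check, but reassuring; j plays the role of i):

```python
# Both factorizations of 5 discussed above, checked exactly.
assert (2 + 1j) * (2 - 1j) == 5
assert (1 + 2j) * (1 - 2j) == 5
print("both (2+i)(2-i) and (1+2i)(1-2i) equal 5")
```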

Hmmmm... I think it may be the case that these factorizations display a type of symmetry that is important later on, where exchanging *a* and *b* in the Gaussian integer does not change the result of an equation containing them. What we did in the factorizations above is exchange *a* and *b*. 

But somehow I feel I can make it work for 7 as well, which is supposed to be a Gaussian prime. Let's take a look.

(√6+i)(√6-i)=6-√6 i+√6 i+1=7

Doesn't exchanging *a* and *b* have to work?

(1+√6i)(1-√6i)=1-√6i+√6i+6=7. Yes, it works.

Now I really am confused: I thought 7 was a Gaussian prime, but I have found two different ways of factoring it that seem distinct. The method should work on any Gaussian integer, in fact. These two factorizations do not seem trivially different. What is going on?

----------


## desiresjab

Here is another factorization for 7.

(2+√3i)(2-√3i)=4+3=7.

Maybe those factors are not irreducible, so this factorization would not count. I do not know for sure. But there it is anyway, another factorization of 7. I could not find another one for 5.

----------


## YesNo

> On to the more interesting question of ways to factor 5 in the Complex numbers. 
> 
> We already know (√4+i)(√4-i)=4-√4 i+√4 i+1=5.


There are infinitely many ways to factor 5 in the complex numbers. Let c be a complex number. Since the complex numbers are a field, 5/c = d is a complex number. Multiply both sides by c and get 5 = cd.
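A numerical illustration of this field argument (my own sketch, allowing floating-point tolerance):

```python
# In the complex numbers any nonzero c divides 5, so 5 = c * (5/c).
import cmath

for c in [1.5 + 2j, -3j, 0.25 + 0.1j]:   # arbitrary nonzero complex numbers
    d = 5 / c
    assert cmath.isclose(c * d, 5)
print("every nonzero complex number c yields a factorization 5 = c * d")
```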

However, if one restricts attention to only the Gaussian integers https://en.wikipedia.org/wiki/Gaussian_integer, that is complex numbers like a + bi where a and b are in Z, then 5 should have a unique factorization into irreducibles (primes). 




> To write (2+i)(2-i)=4-2i+2i+1=5 is essentially a trivial change from √4 to 2, so is cheating and not a valid different factorization. But how about this?:
> 
> (1+2i)(1-2i) = 1 - 2i + 2i - 4i^2 = 1 + 4 = 5 
> 
> *That is definitely a different factorization of 5*. Are there others? Maybe. I have not validated your claim yet. All it takes is the above to show lack of unique factorization. I am simply curious if there are more, or infinitely many, as you said, I believe.


However, you seem to have two different factorizations above. I will have to check this further. Also, I am not sure why the Gaussian integers are a principal ideal domain which would make it be a unique factorization domain. So I will look up some proof for that as well.




> Hmmmm... I think it may be the case that these factorizations display a type of symmetry that is important later on, where exchanging *a* and *b* in the Gaussian integer does not change the result of an equation containing them. What we did in the factorizations above is exchange *a* and *b*. 
> 
> But somehow I feel I can make it work for 7 as well, which is supposed to be a Gaussian prime. Let's take a look.
> 
> (√6+i)(√6-i)=6-√6 i+√6 i+1=7
> 
> Doesn't exchanging *a* and *b* have to work?
> 
> (1+√6i)(1-√6i)=1-√6i+√6i+6=7. Yes, it works.
> ...


In the case of 7, note that √6 is not an integer, that is, an element of Z. Therefore √6+i and √6-i are not Gaussian integers, but complex numbers. In the field of complex numbers 7 is a unit and everything divides it, but not in the ring of Gaussian integers.

----------


## desiresjab

> There are infinitely many ways to factor 5 in the complex numbers. Let c be a complex number. Since the complex numbers are a field, 5/c = d is a complex number. Multiply both sides by c and get 5 = cd.
> 
> However, if one restricts attention to only the Gaussian integers https://en.wikipedia.org/wiki/Gaussian_integer, that is complex numbers like a + bi where a and b are in Z, then 5 should have a unique factorization into irreducibles (primes). 
> 
> 
> 
> However, you seem to have two different factorizations above. I will have to check this further. Also, I am not sure why the Gaussian integers are a principal ideal domain which would make it be a unique factorization domain. So I will look up some proof for that as well.
> 
> 
> ...


Oh, yes, that is correct, they are complex numbers, not Gaussian integers.

----------


## desiresjab

Regarding your next-to-last comment in your last post: All principal ideals are based on the _multiple_ concept, if I have my reading straight this time. I have a sneaking suspicion that there are ideals based on other properties than simply "_being a multiple of_." I have a hunch Carmichael numbers might be non-principal ideals. But I seem to be about 50-50 on the hunches these days.

----------


## YesNo

An ideal is also an additive subgroup. From that perspective, if there is more than one generator then one has to consider not only the multiples of each of the generators, but also sums of anything generated from those two or more generators. One could generate the even numbers in the integers Z by using the principal ideal generated by 2 or the ideal generated by the pair (2, 4). However, those ideals are the same.

Your earlier observation about factoring 5 in two different ways still has me puzzled.

Edit: I think this resolves my earlier puzzlement:

The two factorizations of 5 provided earlier are the same up to units in the Gaussian integers. So unique factorization, up to units, still holds. 

To see the significance of this, look at 10 in the integers. This factors as 10 = 2*5 but also as 10 = (-2)*(-5). Those are two different factorizations but not up to units since if I multiply (-2)*(-5) by 1 = (-1)*(-1) then (-2)*(-5)=2*5. 

In the case of Gaussian integers there are four units: 1, -1, i, -i. If I multiply 1+2i by 1 = (-i)(i), I get (-i)(i)(1+2i) = i(2-i). If I multiply 1-2i by 1 = (-i)(i), I get (-i)(i)(1-2i) = (-i)(2+i). So with 1 = (i)(-i), I get 5 = (2+i)(2-i) = 1(2+i)(2-i) = (i)(-i)(2+i)(2-i) = i(1-2i)(2-i) = i(2-i)(1-2i) = (1+2i)(1-2i).

So the two factorizations are the same up to units.
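The unit bookkeeping above can be verified mechanically; here is a Python sketch (my own) checking that the factors pair up as associates:

```python
# In Z[i] the units are 1, -1, i, -i. The two factorizations of 5 differ
# only by these units: each factor of one is a unit times a factor of the other.
units = [1, -1, 1j, -1j]
# 1+2i is a unit times 2-i ...
assert any(u * (2 - 1j) == 1 + 2j for u in units)
# ... and 1-2i is a unit times 2+i.
assert any(u * (2 + 1j) == 1 - 2j for u in units)
print("the two factorizations of 5 are the same up to units")
```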

----------


## desiresjab

All right, then.

* * * * *

(√5+i)(√5-i)=5-√5 i+√5 i+1=6

(√1+i)(√1-i)(√2+i)(√2-i)=6

We know the second factorization is not fully commutable.

* * * * *

These factorizations are *in* complex numbers. We can be *in* complex numbers, it is a number system. Are we ever _in_ Gaussian integers in that sense? I just see them as complex numbers of the form x^n - 1, not as a number system. Are we ever really in Gaussian integers, other than to examine one or a few? They are just a subset of the complex numbers, are they not, perhaps part of some quadratic field?

----------


## YesNo

If we restrict ourselves to using only elements from the integers then we are in the integers and can see what algebraic structure exists there (commutative ring). The same thing goes for the Gaussian integers. Just use only Gaussian integers. In the integers or the Gaussian integers one can talk about unique factorization and primes. 

Now if we expand the set to include all the multiplicative inverses, we move from integers to rationals (or reals) or from Gaussian integers to complex numbers, then one loses the idea of prime and unique factorization, but one gets all those multiplicative inverses and the algebraic structure is different (field).

Alternatively one can think of elements in the set of Gaussian integers as not being complex numbers. This would justify having a different name for them even though they look the same and have many other properties of addition and multiplication in common.

----------


## desiresjab

Aren't Gaussian integers only a tiny slice (an infinite tiny slice) of the complex field? Only a few numbers are of the form x^n - 1. Or is that x^n + 1? Christ!!!

Before, I was kind of thinking of Gaussian integers as just another name for Complex integers. I can see that is dead wrong. There are relatively so few Gaussian integers (in the sense of density) that I don't see what good they are. Anything recovered there would be for only a few numbers. In fact, the Gaussian integers seem really, really, really restricted.

I guess I will have to get used to tooling around in these various subsets and nail down their identities better. It turns out I do not yet know exactly where I am.

----------


## desiresjab

Wait! Wait! Wait! Wait! Wait! Where do I get my stupid notions from sometimes? I think from laziness, which looks at things too casually and without enough effort.

A Gaussian integer is simply a Complex number whose real _and_ imaginary parts both have rational coefficients, in fact integer ones. They are not that sparse, then. They will fall right where integers fall on the Cartesian plane. And didn't Gauss himself say that a complex number both of whose parts are rational is an integer? He manages to get much out of nothing. These are merely the complex numbers that are just the regular integers, aren't they, the ones able to shed the extra clothing of a complex number and look like a normal integer--well, behave like one, too?

I am seeing it much better now, actually. I seem to need rescuing from misconstruction more often these days as the iron gets heavier.

----------


## YesNo

Yes, the Gaussian integers are those complex numbers where both the real and imaginary parts are integers, such as, 3 + 5i or 250 - 26345i. 

I am studying Game Theory because my daughter is taking a class in it and she discusses it. Also, I've picked up a couple of books from the library on fractals. I want to see to what extent a market chart can be represented as a self-affine fractal and where that breaks down. The fractal should be random but the market chart is not according to socionomics although fractals can be used to model them.

----------


## desiresjab

> Yes, the Gaussian integers are those complex numbers where both the real and imaginary parts are integers, such as, 3 + 5i or 250 - 26345i. 
> 
> I am studying Game Theory because my daughter is taking a class in it and she discusses it. Also, I've picked up a couple of books from the library on fractals. I want to see to what extent a market chart can be represented as a self-affine fractal and where that breaks down. The fractal should be random but the market chart is not according to socionomics although fractals can be used to model them.


I love how complex functions act, I just do not like calculating them very much. It really is a job for computers, being more messy on every level. The Euclidean algorithm is more messy, too, and correspondingly easier to make a mistake in. I am not tempted to do many of these long-winded (ugly) calculations. They are, in fact, almost precisely what I love to stay away from in math when I can and still understand. However, it is desirable to be able to distinguish Gaussian primes quickly from other numbers that might resemble one but are not. It could easily become necessary to get down in the mud of Gaussian integers and do the arithmetic in some context or other.

* * * * *

I have read a little on both Game Theory and Fractals. Can't help you, really. I do know stock prices were there right from the beginnings of the theory, however, as they were involved in some observations of a few pioneers, Lorentz among them, I believe. Now I remember, he happened to see a chart of historical cotton prices and noticed how similar it was to long term weather patterns.

----------


## YesNo

I think the Gaussian primes would be either primes from the integers with a remainder of 3 mod 4 or the factors in Gaussian integers of primes from the integers with a remainder of 1 mod 4. However, I don't know if that is true or not. That is, I can't think of how I would try to prove that at the moment, in particular prove that those two characteristics give all the Gaussian primes.

----------


## desiresjab

The 4n+3 primes come as they are. 4n+1 primes from the integers can be broken back to smaller prime factors. Even 2 is not prime in that world because it can be factored as (1+i)(1-i).

Does that mean there is no one-to-one correspondence between primes in the integers and primes in the Gaussian integers? 5 is made of two factors, for instance. The density of Gaussian primes must be slightly higher because of that.
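For quickly telling Gaussian primes apart, here is a Python sketch of what I understand to be the textbook norm criterion (my own paraphrase, so treat it skeptically): a + bi with a and b both nonzero is a Gaussian prime exactly when its norm a^2 + b^2 is an ordinary prime, and a purely real or purely imaginary element is a Gaussian prime exactly when its nonzero part is, up to sign, an ordinary prime congruent to 3 mod 4.

```python
# Gaussian primality test via the norm criterion described above.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    return is_prime(a * a + b * b)

assert is_gaussian_prime(1, 1)        # 1+i, a factor of 2, is itself prime
assert is_gaussian_prime(2, 1)        # 2+i, a factor of 5
assert is_gaussian_prime(3, 0)        # 3 stays prime (3 mod 4)
assert not is_gaussian_prime(5, 0)    # 5 = (2+i)(2-i) splits
print("norm criterion matches the 4n+1 / 4n+3 pattern discussed above")
```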

----------


## YesNo

I think one could put the Gaussian primes into a one-to-one correspondence with the integers although I don't have a function that would do that at the moment. But that just counts them. There may be some other "measures" associated with them outside of quantity.

Here is a suggestion for how to proceed in constructing a one-to-one correspondence. First use the one-to-one correspondence between the integers and the rational numbers. Then map the Gaussian integer (or just prime), represented as a+bi, to the rational number represented as a/b. Those two one-to-one correspondences might be composed in some way to get the desired one-to-one correspondence to show the primes and the Gaussian primes have the same countable infinity of elements.

----------


## desiresjab

If the normal primes have density 1, the Gaussian primes should have density 2.

----------


## YesNo

What is "density"? One might be able to look at that as the number of primes in some region. It reminds me of big-O estimates of the number of primes less than a certain number n.

----------


## desiresjab

Tired of coming on here to be told for no reason I am yet again barred.

----------


## desiresjab

There ought to be 3/2 as many primes in the Gaussians as there are in the integers, since every 4n+1 prime in the integers has two prime factors in the Gaussians, is what I meant by density.

----------


## YesNo

I think one could define something like that density if one asked how many primes are less than the norm (or absolute value in integers). There might be more than 3/2 as many primes in the Gaussian integers less than a certain norm n since they are contained in a plane rather than just a line, but maybe not.
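To put rough numbers on the density question, here is a small counting experiment in Python (my own; note it counts every associate and conjugate separately, so the Gaussian count runs well above the 3/2 ratio):

```python
# Count ordinary primes up to N versus Gaussian primes of norm up to N.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    """Norm criterion for primality in Z[i] (my paraphrase of the textbook rule)."""
    if a == 0 or b == 0:
        m = abs(a) + abs(b)            # the nonzero part (or 0)
        return is_prime(m) and m % 4 == 3
    return is_prime(a * a + b * b)

N = 100
ordinary = sum(1 for n in range(2, N + 1) if is_prime(n))
gaussian = sum(1 for a in range(-10, 11) for b in range(-10, 11)
               if a * a + b * b <= N and is_gaussian_prime(a, b))
print("ordinary primes up to", N, ":", ordinary)
print("Gaussian primes with norm up to", N, ":", gaussian)
```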

----------


## desiresjab

I just meant to give a rough idea that there are more, or at least seem to be.

* * * * *

Well, it is very interesting that ideals somehow manage to recover unique factorization (though only in some limited cases, according to Wildberger, he called them quadratic instances, or something like that), so now the aim has to be to produce a specific example of how ideals managed to recover unique factorization. After that, it is on to Carmichael numbers and the understanding of how they have anything to do with ideals. That is the plan anyway, as far as I can see right now. Something always gets in the way of direct progress.

----------


## YesNo

I don't know how ideals do that either. That does seem like a major justification for considering them.

----------


## desiresjab

> I don't know how ideals do that either. That does seem like a major justification for considering them.


Ideals seem to eliminate the difference between 4n+1 primes and 4n+3 primes. That is, every ideal generated by a prime is a prime ideal regardless of its type to begin with. This is really cool, even if it only provides some unique factorization domains. The fact that any part of unique factorization has been recovered has got to be one of the greatest achievements of mathematics.

----------


## YesNo

The Gaussian integers are a unique factorization domain already. So, the 4n+1 or 4n+3 primes are not a problem. We just get different primes than we might have expected in the Gaussian integers. 

But the integers with the sqrt(-5) are not a unique factorization domain. So here is where the ideals should help, but I don't see how at the moment.
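The classic failure can be checked numerically; here is a Python sketch (mine) confirming that 6 = 2 * 3 = (1 + sqrt(-5))(1 - sqrt(-5)), the standard non-UFD example in the integers with sqrt(-5) added:

```python
# Verify the two factorizations of 6 in Z[sqrt(-5)]. The only units there
# are 1 and -1, so these factorizations are genuinely different, not associates.
import cmath

r = cmath.sqrt(-5)                           # sqrt(-5) as a complex number
assert cmath.isclose((1 + r) * (1 - r), 6)   # (1+r)(1-r) = 1 - r^2 = 1 + 5
assert 2 * 3 == 6
print("6 = 2 * 3 = (1 + sqrt(-5))(1 - sqrt(-5)) in Z[sqrt(-5)]")
```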

----------


## desiresjab

> The Gaussian integers are a unique factorization domain already. So, the 4n+1 or 4n+3 primes are not a problem. We just get different primes than we might have expected in the Gaussian integers. 
> 
> But the integers with the sqrt(-5) are not a unique factorization domain. So here is where the ideals should help, but I don't see how at the moment.


In red, are you talking about something like 2+i and 2-i as primes?

There is a bit of mystery here. I am wondering why if the Gaussians are a UFD is there more than one way to factor numbers such as 6 or 5 within it? I can almost trust I am overlooking something.

Anyway, key things get remembered. Another one is 

_Quotient rings formed by dividing through by a maximal ideal produce a field_. 

Since the field seems to consist of only 0 and 1, I can't see yet why that is so important, but it seems to be.

Here is another key to hold onto:

_Ideals are to rings as normal subgroups are to groups_.

That shouts: _Go study groups_, doesn't it?

I hope that is correct. There is a lot of talk of cosets, too, and I think that may be more group theory in spite of the name recalling set theory.

* * * * *

My experience with being slow tells me a clear understanding of ideals is in front of my face, unrecognized, while I acclimate my brain to something new. All these little breakthroughs will eventually amount to sort of an epiphany. 

The idea of principal ideals is pretty clear--they are just multiples. Like rings, ideals need 0 and 1 in the set--maybe some unit that stands in for 1, since the set of even numbers, for instance, cannot have a 1 in it.

Ideals are presented in additive terminology, though they have multiplicative properties.

I think this idea of "splitting" I keep running into refers to non-commutativity. I think this splitting is related to factorization problems. My present guess is that ideals manage to bypass this problem. I think this problem is related to the fact that 4n+3 primes are also primes in the Gaussians but 4n+1 primes are not, as they can be factored into smaller factors.

I hope I am not too amiss here. I am trying to put my collage together.

----------


## YesNo

> In red, are you talking about something like 2+i and 2-i as primes?
> 
> There is a bit of mystery here. I am wondering why if the Gaussians are a UFD is there more than one way to factor numbers such as 6 or 5 within it? I can almost trust I am overlooking something.


Up to units, there is only one way to factor 6 or 5 in the Gaussian integers. For example, one can factor 6 = (3)(2) in Z. But one can also factor it as (-3)(-2). One could also reorder the factors as (2)(3). These are all different factorizations, but they are not what unique factorization tries to capture as an idea. The unique factors do not depend upon the order of the factors, nor do they depend on whether one can multiply the factors by units and get a different-looking set of factors (associates). Again using 6 in Z, we can write 6 = (2)(3) = (1)(2)(3) = (-1)(-1)(2)(3) = (-2)(-3). In the Gaussian integers there are four units, not two as in Z: 1, -1, i, -i. Their norms are all 1. That challenges our normal intuition about what a unit should be (not just 1 or -1) and what an associate factor would be (not just multiplying the factor by -1).
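This can be made concrete with a few lines of code. A sketch, again storing Gaussian integers as pairs (a, b) for a + bi:

```python
# Sketch: the units of the Gaussian integers are 1, -1, i, -i (all of norm 1),
# and multiplying a factor by a unit only produces an "associate" factorization.

def mul(x, y):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i."""
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

def norm(x):
    return x[0] ** 2 + x[1] ** 2

units = [(1, 0), (-1, 0), (0, 1), (0, -1)]     # 1, -1, i, -i
assert all(norm(u) == 1 for u in units)

# 5 = (2 + i)(2 - i); multiplying one factor by a unit u and the other by
# u's inverse gives another factorization of 5 -- the "same" one up to units.
f1, f2 = (2, 1), (2, -1)
assert mul(f1, f2) == (5, 0)
for u, u_inv in [((1, 0), (1, 0)), ((-1, 0), (-1, 0)),
                 ((0, 1), (0, -1)), ((0, -1), (0, 1))]:
    assert mul(mul(u, f1), mul(u_inv, f2)) == (5, 0)
```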




> Anyway, key things get remembered. Another one is 
> 
> _Quotient rings formed by dividing through by a maximal ideal produce a field_. 
> 
> Since the field seems to consist of only 0 and 1, I can't see yet why that is so important, but it seems to be.
> 
> Here is another key to hold onto:
> 
> _Ideals are to rings as normal subgroups are to groups_.
> ...


I am not clear about all of this either and I am finding it interesting to get a better understanding. In the Wikipedia article, https://en.wikipedia.org/wiki/Unique...ization_domain , there is a class chain:

_commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ finite fields_

There should be examples of non-unique factorization in structures to the left of "unique factorization domains" but none to the right. There should also be examples of a unique factorization domain that is not a principal ideal domain, which is where the prime ideal questions we are discussing seem to be most important. But I am still unclear about how to formulate the questions.

----------


## desiresjab

Speaking of prime factorization in the Gaussians, you must be right. But in the Complex numbers is 2 even a prime? I can factor it as

(1+i)(1-i).

5 = (2+i)(2-i). Those are Gaussian integers, are they not (a and b are both integers)? I believe those factors are not units, either. Does this make 5 not a prime in the Gaussian integers? I believe no 4n+1 prime is a prime in the Gaussian integers, but I could be confusing the Gaussians with the complex numbers in general.

Now for 6 I really do not understand why 2x3 would be a prime factorization in the Gaussian integers, since I do not even believe 2 is a prime in that set. Isn't this the prime factorization?

(3)(1+i)(1-i)=6

What am I not getting about units for asking this question?

----------


## desiresjab

I have been looking around for an example of a non principal ideal. Of course it has been sitting in front of my nose.

All x+1 seem to form a non-principal ideal. A non-principal ideal is the kind I suspect Carmichael ideals to be.

----------


## YesNo

> Speaking of prime factorization in the Gaussians, you must be right. But in the Complex numbers is 2 even a prime? I can factor it as
> 
> (1+i)(1-i).


In the complex numbers there aren't any primes. All nonzero elements are units, since every nonzero complex number has a multiplicative inverse. So 2 is not a prime in the complex numbers. It is a prime in the regular integers, denoted by Z, but it is not in the Gaussian integers because of the factorization you mentioned above. In the Gaussian integers, unlike the complex numbers, 2 is not a unit because 1/2 = 1/2 + 0i does not exist in the Gaussian integers, since 1/2 is not an integer in Z.




> 5 =(2+i)(2-i). Those are Gauusian integers, are they not (for a and b are both integers)? I believe those factors are not units, either. Does this make 5 not a prime in Gaussian integers. I believe no 4n+1 prime is a prime in the Gaussian integers, but I could be confusing Gaussians with the Complex numbers in general.


In the Gaussian integers you have factored 5 into two other Gaussian integers. So 5 is not a prime in the Gaussian integers. In the complex numbers 5 is not prime either, but that is because it is a unit. It has a multiplicative inverse 1/5 + 0i in the complex numbers, but that inverse is not in the Gaussian integers because both a and b in a + bi have to be normal integers that one has in Z.




> Now for 6 I really do not understand why 2x3 would be a prime factorization in the Gaussian integers, since I do not even believe 2 is a prime in that set. Isn't this the prime factorization?
> 
> (3)(1+i)(1-i)=6
> 
> What am I not getting about units for asking this question?


Right, in the Gaussian integers 6 factors as you mentioned. In the normal integers, or Z, 6 = (2)(3). I was using that factorization to show what an associate was. In Z, 2 has an associate -2. In the Gaussian integers 2 has -2, 2i and -2i as associates, since I just multiplied 2 by all the units of the Gaussian integers (1, -1, i, -i).

----------


## YesNo

> I have been looking around for an example of a non principal ideal. Of course it has been sitting in front of my nose.
> 
> All x+1 seem to form a non principal ideal. A non principal ideal is the kind I suspect Carmichael ideals to be.


A non-principal ideal for this polynomial ring over the integers would be (x, 2), that is, the ideal generated by x and 2. This is not the whole polynomial ring (which would be generated by a unit, say 1); it contains exactly those polynomials whose constant term is even. It cannot be reduced to a principal ideal, because then we would need some single polynomial (besides a unit like 1) that divided both x and 2. If such a thing existed it would have to be both an integer greater than 1 and a polynomial of first degree in x, which is a contradiction. So this ideal cannot be reduced to a principal one.
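The (x, 2) example can be sanity-checked mechanically. A sketch, using the observation above that membership amounts to having an even constant term (the list-of-coefficients representation is my own choice):

```python
# Sketch: in Z[x], a polynomial lies in the ideal (x, 2) exactly when its
# constant term is even (f = x*g + 2*h forces f(0) = 2*h(0)). Polynomials
# are lists of coefficients [c0, c1, c2, ...] for c0 + c1*x + c2*x^2 + ...

def in_ideal_x_2(poly):
    constant_term = poly[0] if poly else 0
    return constant_term % 2 == 0

assert in_ideal_x_2([0, 1])        # x itself
assert in_ideal_x_2([2])           # 2 itself
assert in_ideal_x_2([4, 3, 7])     # 4 + 3x + 7x^2
assert not in_ideal_x_2([1])       # 1 is NOT in (x, 2) ...
# ... so (x, 2) is a proper ideal, yet no single polynomial generates it.
```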

This link might be useful in terms of making sense of how ideals resolve the problem of not having unique factorization. These are just notes used with a larger text that popped up from an internet search, and I don't know who the author is, but the first few pages seemed to summarize the problem: http://www2.math.ou.edu/~kmartin/nti/chap11.pdf

----------


## desiresjab

That link was helpful in that it knocked a few chips off a mountain of marble and raised more questions than it answered. I must say the notation and terminology are (to resort to Trumpian irreducibles) very bad; very, very bad, and the people who devised them were some of the worst people.

That is still my gripe about mathematics--the same notation and terminology mean different things in different areas of math, sometimes closely related areas! The job of revision is too big for me, but I may get stubborn and refer to ideals like this <1+√5> in the future, instead of with parentheses which are so easily confused with standard multiplication. Later on, the author of that paper admits the terminology and notation in ideal theory has evolved in an unfortunate manner.

I wonder how many graduate students have sat in upper level math classes praying for a concrete example using almost all numbers and a minimum number of letters in place of numbers, just so they can establish through a concrete example what the hell is being generalized, and how so, in the first place?

* * * * *

Some of the pieces I am chipping off are above my current understanding and stored away until such time as they fit into a total picture. The idea of division in ideals is completely queer to me as I view it presently. It doesn't even seem like division. Division in ideals seems like the reciprocal of division as I know it but not multiplication.

What I want to do is go back to a more basic level.

* * * * *

Back to 2Z, the good old even integers. Is this the set that forms the ideal of 2Z?

{0, 1, 2, 4, 6, 8,...}.

I am not sure if I have to notate it somehow as 0 and 1 adjoined to this infinite set of even numbers or not. From everything I have read I am led to think that

{0, 1, 2, 4, 6, 8,...}

would be the complete set of ideals for 2Z, 0 being the 0 ideal (additive identity), 1 being the unit (multiplicative identity), and 2 being the generator or principal ideal of the set. Each of the elements is called an ideal, 2 being the principal ideal.

So what is the entire set itself called? Is it called the set of ideals? If so, then I suppose the set of ideals of 2Z, or something like that? I am a bit confused on these several points, so I hope you can answer them.

Once I have these issues straight, I can move on to my next issues.

----------


## YesNo

I have to go to a Chinese New Year celebration, so I'll get back to you either this evening or tomorrow.

I agree that examples are always good. The simpler the better. 

The ideal (2) in the integers is the set of all even numbers, positive, negative and 0 with the operations that one has in the integers. Are you referring to the finite field containing two elements 0 and 1 which can be viewed as equivalence classes of even and odd integers?

Since an ideal is a set of elements from the ring with ring operations, division of these objects would be viewed as set inclusion.

----------


## desiresjab

We could speak in general of the ideals generated by the ring of integers Z, where Z is really 1Z where the 1 has been dropped. The numbers generated when 1 is used as the generator are just 1∙1, 1∙2, 1∙3, 1∙4,..., in other words just the full set of integers, which contains within itself all subsets of multiples of every integer, since those multiples are just integers, too.

The ideal 2Z is the set of all even integers, plus 1 and 0, because an ideal always has to have 1 and 0 in its set.

We could have 4Z, where the ideal would be all multiples of 4 from the integers, plus the elements 1 and 0 again, adjoined or however it might be expressed in the case of 0.

An ideal is not maximal if all the elements of the ideal are found in a larger subset of Z which is smaller than Z itself. In the case of 4Z, all integers which are multiples of 4 would be contained in the larger set of merely even numbers 2Z, which lies strictly between the multiples of 4 and the multiples of 1. Therefore 4Z is not a maximal ideal, because all its elements can be found in 2Z. However, 2Z is a maximal ideal because no subset smaller than Z itself "contains" it. This use of _contains_ actually means _divides_ in the language of ideals, according to the link recently posted by Yes/No.

It turns out that maximal ideals are generated by ordinary prime numbers, and only by them, no distinction being made between 4n+1 primes and 4n+3 primes.

I believe all maximal ideals are prime ideals, but 0 is a prime ideal, too, so all prime ideals are not maximal.
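The containment-equals-divisibility rule in the paragraphs above is easy to check by machine. A minimal sketch (the `is_maximal` brute-force helper is my own illustration, not standard terminology):

```python
# Sketch: in Z the ideal mZ sits inside nZ exactly when n divides m, so
# "contains" really does mean "divides". pZ is maximal when no nZ with
# 1 < n < p strictly contains it -- i.e. exactly when p is prime.

def contains(n: int, m: int) -> bool:
    """Does the ideal nZ contain the ideal mZ? True iff n divides m."""
    return m % n == 0

def is_maximal(p: int) -> bool:
    return all(not contains(n, p) for n in range(2, p))

assert contains(2, 4) and not contains(4, 2)    # 4Z ⊂ 2Z but not vice versa
assert is_maximal(2) and is_maximal(7)          # prime generators: maximal
assert not is_maximal(4) and not is_maximal(6)  # composites sit inside 2Z, 3Z, ...
```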

----------


## desiresjab

The ring of integers (mod p) where p is a prime is actually a field, a _finite field_. These fields are cyclic. I am not sure what precisely makes it a field instead of a ring, since I have been limping along under the impression that all four arithmetic operations had to be fully defined for fields. Well now, it occurs to me based on what Yes/No recently said about inverses, that as long as the modulus is prime every element of the residue system will have an inverse. Since every element has an inverse, that must be what makes it a field. They call that _every element being a unit_, which is a new one on me.

Under a modulus that is prime, where every element has an inverse, I suppose that is enough to fully define division, which would make the integers under a prime modulus fields instead of mere rings. Somehow, I think they are fields and rings at the same time, if the definitions are not strictly mutually exclusive, which they could very well be.
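The claim that a prime modulus makes every nonzero residue invertible can be brute-force checked. A sketch, with hypothetical helper names:

```python
# Sketch: Z/nZ is a field exactly when every nonzero residue has a
# multiplicative inverse, which happens exactly when n is prime.

def has_inverse(a: int, n: int) -> bool:
    """Does the residue a have a multiplicative inverse mod n?"""
    return any((a * b) % n == 1 for b in range(1, n))

def is_field(n: int) -> bool:
    return all(has_inverse(a, n) for a in range(1, n))

assert is_field(7)            # Z/7Z: every nonzero element is a unit
assert not is_field(6)        # Z/6Z: 2, 3 and 4 have no inverses ...
assert (2 * 3) % 6 == 0       # ... and 2, 3 are zero divisors mod 6
```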

I have been studying all day. I do not want to look this latest item up. Yes/No will know the answer. I will let him answer it. I want to watch movies right now.

----------


## desiresjab

Ah, yes, finally many things are coming together at once. Maximal ideals are indeed always prime ideals.

And, no, it is not impossible to be a Field and a Ring at the same time. A one-line proof: all Fields are Integral Domains, and the definition of an Integral Domain makes it a particular type of Ring.

The integers (mod p) where p is a prime, which we are so familiar with, form a commutative ring where no two nonzero elements (residue classes) multiplied together equal zero. The elements also form a Finite Field, and finite or not, all fields are integral domains, which are also rings.

In the language of ideals the integers (mod n) are called a Quotient Ring. This is what we are most familiar with in different language. Still, it is best not to lean too heavily on this particular understanding, for there is nothing cyclic about ideals in general. It does, however, apply, because we would not be amiss in thinking of a quotient ring simply as the integers (mod n), where n is not necessarily a prime, as long as we do not forget that an object such as a Quotient Ring can be and is constructed not only from integers but from polynomials as well, so that one has to remain aware of which type an article or portion of an article is alluding to.

* * * * *

Rings of polynomials are more difficult to deal with algebraically than integers. The dudes who developed the theory were high-powered intellects at the kind of old-style algebraic work that mathematicians like Euler and Jacobi were known for--those long painful derivations that amaze anyone puzzling them out for the first time. We have not gotten our hands dirty yet with manipulating polynomials. We might have to do that some time, or perhaps it can be avoided.

I hear other areas calling, so I want to get this one settled up as soon as I can. Combinatorics I already have pretty good basic experience and knowledge of, in the sense that I played poker for many years and I was the kind who liked to calculate a lot of things out. Fancy counting is a load of fun, as I see it. I want to do more and see how it relates back to some of the stuff in number theory we have been looking at. We know for a fact that every group is isomorphic to a permutation group. I think Cayley proved that.

----------


## desiresjab

Yeah. Combinatorics, there is a lot more I want to learn there. In complex combinatorial situations, getting the logic just right is hairy. I love that stuff. Right now I am stuck here with ideals. But that is good. I only get stuck where I have chosen to get stuck already, for any one place that one settles in to confront higher mathematics there will be challenges that block progress for a while. Though I have in the last day made many connections that have avoided me, still I have a long way to go with ideals before I will be satisfied. The final capper will have to be an exposition of Carmichael ideals.

----------


## desiresjab

We can relax a bit now and give an entertaining question I remember from somewhere in the past that has to do with probability and game theory.

Three men with rifles are arranged in an equilateral triangle, all an equal distance apart. They are going to fire at each other in turn, until only one remains.

Participant A has a 90% accuracy rate, and participant B has a 70% rate. Poor C is a lousy shot, hits his target only 30% of the time, and is first to fire. What is his best strategy?
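Out of curiosity, the puzzle can be estimated by simulation. The sketch below makes two assumptions of mine that the problem statement leaves open: the firing order is C, A, B, repeating, and every shooter aims at the most accurate living opponent. It then compares C's two candidate opening moves:

```python
import random

# Monte Carlo sketch of the three-shooter puzzle. Assumed firing order:
# C, A, B, repeating; each shooter targets the most accurate living opponent.
# We compare C's two openings: fire at A, or deliberately fire into the ground.

ACCURACY = {"A": 0.9, "B": 0.7, "C": 0.3}

def run_truel(c_opens_with_miss: bool) -> str:
    """Play one truel to the end and return the survivor's name."""
    alive = ["C", "A", "B"]                  # also the firing order
    first_shot = True
    while len(alive) > 1:
        for shooter in list(alive):
            if shooter not in alive or len(alive) == 1:
                continue
            if shooter == "C" and first_shot and c_opens_with_miss:
                first_shot = False
                continue                     # C deliberately misses
            first_shot = False
            target = max((p for p in alive if p != shooter),
                         key=lambda p: ACCURACY[p])
            if random.random() < ACCURACY[shooter]:
                alive.remove(target)
    return alive[0]

def survival_rate(c_opens_with_miss: bool, trials: int = 20000) -> float:
    wins = sum(run_truel(c_opens_with_miss) == "C" for _ in range(trials))
    return wins / trials

random.seed(0)
p_miss, p_shoot = survival_rate(True), survival_rate(False)
print(f"C survival: {p_miss:.1%} (deliberate miss) vs {p_shoot:.1%} (shoot at A)")
```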

----------


## YesNo

> We could speak in general of the ideals generated by the ring of integers Z, where Z is really 1Z where the 1 has been dropped. The numbers generated when 1 is used as the generator are just 1∙1, 1∙2, 1∙3, 1∙4,..., in other words just the full set of integers, which contains within itself all subsets of multiples of every integer, since those multiples are just integers, too.
> 
> The ideal 2Z is the set of all even integers, plus 1 and 0, because an ideal always has to have 1 and 0 in its set.
> 
> We could have 4Z, where the ideal would be all multiples of 4 from the integers, plus the elements 1 and 0 again, adjoined or however it might be expressed in the case of 0.
> 
> An ideal is not maximal if all the elements of the ideal are found in a larger subset of Z which is smaller than Z itself. In the case of 4Z, all integers which are multiples of 4 would be contained in the larger set of merely even numbers 2Z, which lies strictly between the multiples of 4 and the multiples of 1. Therefore 4Z is not a maximal ideal, because all its elements can be found in 2Z. However, 2Z is a maximal ideal because no subset smaller than Z itself "contains" it. This use of _contains_ actually means _divides_ in the language of ideals, according to the link recently posted by Yes/No.
> 
> It turns out that maximal ideals are generated by ordinary prime numbers, and only by them, no distinction being made between 4n+1 primes and 4n+3 primes.
> ...


That is how I see it as well except for the part "an ideal always has to have 1 and 0 in its set". An ideal must have 0 in it since 0 is in the ring and so the generator times 0 will be 0. So 0 is in the ideal. However, if 1 were in the ideal, then the entire ring would be in the ideal and the ideal would be generated by 1. So, in general 1 would not be in the ideal, however, 0 must be there.

A field is a special kind of ring as you mentioned in later posts.

That is an interesting game theory question. I don't know the answer. If I were C I would aim first at A. If I were A I would aim first at B. If I were B I would aim first at A. This would lead to a pure strategy for A, B and C for their first moves. But I wonder how to solve this in a more general way.

----------


## desiresjab

What you say about 1 not being in the set makes sense; however, my mind believes it has read a hundred times that 1 must be there. It only has an additive identity without 1 there, no multiplicative identity. Commutative, one-sided ideals always have inverses, do they not? If 1 is not supposed to be there I need to clear it out mentally, but I need to understand how I have misread a hundred times.

----------


## YesNo

I think a commutative ring must have both 0 and 1 in the ring. In Birkhoff and MacLane, "A Survey of Modern Algebra", one of the axioms of a commutative ring requires that there exists an element that is the multiplicative identity. That would be 1. The ring must also have the additive identity or 0. However an ideal need not be the full ring. For example the ideal in Z generated by 3 has all the multiples of 3. This would include 0 but not 1 which is not a multiple of 3. The ring must have 0 and 1, but not the ideal. 

The ring itself is one of the ideals of the ring, which might make this confusing. That ideal which equals the ring itself must contain 1 since that ideal contains everything in the ring. But all the other ideals do not contain 1.

----------


## desiresjab

> I think a commutative ring must have both 0 and 1 in the ring. In Birkhoff and MacLane, "A Survey of Modern Algebra", one of the axioms of a commutative ring requires that there exists an element that is the multiplicative identity. That would be 1. The ring must also have the additive identity or 0. However an ideal need not be the full ring. For example the ideal in Z generated by 3 has all the multiples of 3. This would include 0 but not 1 which is not a multiple of 3. The ring must have 0 and 1, but not the ideal. 
> 
> The ring itself is one of the ideals of the ring, which might make this confusing. That ideal which equals the ring itself must contain 1 since that ideal contains everything in the ring. But all the other ideals do not contain 1.


The following has a highly interesting opening paragraph.

https://en.wikipedia.org/wiki/Subring

Many (in fact the majority of) mathematicians seem to require that a ring contain a multiplicative identity, but there are some who dispense with the notion. 

The next to last sentence in that paragraph is a killer. It says the subring may have a multiplicative identity that is different from the one for R. What the...?

Anyway, slowly this thing is making sense. I always find afterwards that I have read over the truth many times before it had enough meaning for me to be included in my picture. This theory is rife with details that mean everything. If so much information were not included in these precise abstract algebraic formulations, mathematicians would not be able to handle quite complex statements with a few swipes of the chalk, as I have seen them do. The propositions at this level are literally crammed full of detailed information upon which the whole content rests. Unless you understand the details perfectly, you will get the content wrong. That is why I have to be so nit-picky right now and question everything I do not firmly understand and believe--not just until I receive the right answer from someone else, but until I understand perfectly for myself. So when I stop questioning everything, you will know I have probably understood or died.

----------


## desiresjab

That short Wikipedia article I linked to is full of powerful statements. Of course some of those powerful statements are not entirely clear at this point in the journey, but they are quite intriguing as future references to lean on.

* * * * *

Your daughter might find interesting the answer to the question I posed about A, B and C shooting at each other in turn from an equal distance. I could not remember the exact numbers of the question, so I used numbers that were safe. The general idea is there.

C shoots first with only a 30% chance of striking his target, against opponents who have respectively 90% and 70% chances of hitting their target at this distance.

C must fire into the ground. If he were to accidentally kill one of the opponents, the next shot would be fired at him from an opponent who is far more accurate. His best case scenario is that B (70% shooter) kills A (90% shooter) with the next shot. That way C gets to at least fire the first shot at his superior opponent.

----------


## YesNo

That makes sense that C should deliberately miss the target. I'll try to see how that game fits in with the game theory I've been reading about. 

A ring without a multiplicative identity would not fit Birkhoff and MacLane's assumptions, but it shows that there are other ways to organize these algebraic structures. They mentioned that the assumption of the existence of 1 not equal to 0 was to eliminate examples of rings consisting of only the 0 element. A ring, by their definition, has to have at least two elements: 0 and 1. If the ring doesn't have an identity, then the ideals would be subrings, as the article mentioned, whereas I would not think of ideals as rings except in the trivial case where the ideal (generated by 1) equaled the whole ring.

----------


## desiresjab

There seems to be some confusion even among the experts as to what is called what in the theory.

I think I am getting fairly close to a decent understanding. It really helps to work out these little inconsistencies and get everything in its proper place.

One thing that makes ideals so fun to study is that the word is not used elsewhere in mathematics with a gross of other meanings. A brand new word for a brand new idea. One could improve mathematics considerably by combing through an English dictionary and finding suitable terms to replace those that are overworked to the point of ambiguity.

----------


## desiresjab

Oooh! Oooh! I got something else straight that has been mystifying me, and which shows my guess was wrong about "splitting fields" being connected with 4n+1 and 4n+3 primes.

Splitting refers to factoring a polynomial all the way down to linear factors. Even if you factor it you have not split it unless all the factors are linear.

A Splitting Field involves something called Field Extensions, which are literally what they say. Mathematicians take a field like the rational numbers and "adjoin" enough non-rational numbers to it to enable them to split a particular polynomial or class of polynomials into linear factors. The trick is to adjoin just enough numbers to the original set to get the job done, instead of ending up with a much larger set which indeed gets the job of splitting done but also contains many superfluous elements. You want to adjoin the minimum number of elements to the set that enables splitting. This involves a lot of complex techniques, but the basic idea is not that hard, though I imagine the idea of adjoinment in this fashion was quite revolutionary when it was first proposed. The seeds may have originated in group theory.
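The simplest splitting-field example, x² - 2 over the rationals, can be played out with exact arithmetic. A sketch, representing elements of Q(√2) as pairs of rationals (my own encoding, not a library construction):

```python
from fractions import Fraction as F

# Sketch: x^2 - 2 has no rational roots, but after adjoining sqrt(2) to Q it
# splits into the linear factors (x - √2)(x + √2). Elements of Q(√2) are
# pairs (a, b) standing for a + b*sqrt(2), with exact rational coordinates.

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    """(a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2."""
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

zero = (F(0), F(0))
root, neg_root = (F(0), F(1)), (F(0), F(-1))   # ±√2, the adjoined elements

def f(x):
    """Evaluate x^2 - 2 inside Q(√2)."""
    return add(mul(x, x), (F(-2), F(0)))

assert f(root) == zero and f(neg_root) == zero  # both roots now exist
# The product of the two roots is -2, matching the constant term of x^2 - 2:
assert mul(root, neg_root) == (F(-2), F(0))
```

Adjoining just √2 is the minimal move here: no smaller extension of Q contains a root of x² - 2.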

----------


## YesNo

I don't know much about splitting fields, but the idea of adding only what one needs to the base field in order to factor a polynomial over the rationals sounds interesting. 

A pair of confusing terms for me is "prime" and "irreducible". I assume primes only exist in a unique factorization domain, otherwise what one gets are irreducibles which are as far as one can factor an object in the algebraic structure. At least there is a factorization even though it is not the only one.

----------


## desiresjab

> I don't know much about splitting fields, but the idea of adding only what one needs to the base field in order to factor a polynomial over the rationals sounds interesting. 
> 
> A pair of confusing terms for me is "prime" and "irreducible". I assume primes only exist in a unique factorization domain, otherwise what one gets are irreducibles which are as far as one can factor an object in the algebraic structure. At least there is a factorization even though it is not the only one.


Yes, one would think every irreducible would be a prime, but that maybe is not the case. I do not yet know enough to satisfy myself but I am running out of ideas of how to proceed. I have not seen a bit of use for primaries and semi-primes, but I suppose that lies ahead.

Once ideals are understood fully, that, I believe, is the major portion of the higher arithmetic that occupied great minds in the late 19th and 20th centuries. Going beyond ideals may require a measure of inventiveness. A full understanding of Artin's work along with ideals would be much of what one needs. I am not shooting for Artin's work, though. He is too difficult. I would be nowhere near ready for Artin at this point.

----------


## desiresjab

Dr. Salomone talks about that in this video. This guy lectures at light speed, and I like that. Subjects other professors take 45 minutes to discuss he dispatches in ten minutes. No wasted time.

According to him the only cases where irreducibility and primality do not coincide are cases that are special and "not nice." I notice such special cases are usually shoved back and relegated to a later timetable when one is supposedly more advanced. The class here is Abstract Algebra II.

----------


## YesNo

I figure I won't understand anything fully, but some things will be understood enough. Eventually you should be able to read Artin's work. At the moment I don't think I would understand it either. But if we kept searching for clues, it should eventually make sense. I suspect it would take less time and effort to understand Artin than to understand Joyce's Finnegans Wake.

----------


## desiresjab

> I figure I won't understand anything fully, but some things will be understood enough. Eventually you should be able to read Artin's work. At the moment I don't think I would understand it either. But if we kept searching for clues, it should eventually make sense. I suspect it would take less time and effort to understand Artin than to understand Joyce's Finnegans Wake.


And this _is_ the road to Artin. It forks ahead, if anyone wants to go there. Never say never. The mathematician seeks isolated clues about the structure of the universe, the artist builds an artificial one out of the materials at hand. There is some connection. It is a long ways off for the mathematician.

The right junction could present itself. There may be a turnoff toward Brocard. For this reason squares and anything about them is good to pick up. There is so much juice of squares left in field extensions, I have to stay a while longer. I also need now to go back to that difficult paper you linked to a few weeks ago, and see if I can better understand the operation of multiplication of ideals presented there. 

I am getting more secure with ideals, and the idea of a subgroup within a ring that is only reachable through the subgroup itself, which precisely corresponds to what an ideal is, I hope.

Ideals are to rings as normal subgroups are to groups. Does this mean the _friendly_ rings referred to by Salomone correspond to the _normal_ subgroups of group theory, as opposed to some other kind of subgroup that is not normal, like a non-commutative one, perhaps? I am going to stop guessing, but it is an addictive habit, and it seems to serve me even when I am wrong, by keeping me asking questions until I am sure about something. Right now I am not sure what I am supposed to try to be sure about next, which is less fun than knowing where you should look, which is less fun than suspecting where you should look. At the moment I neither know nor suspect.

----------


## desiresjab

I hope my old pal Yes/No is all right. It is not like him to leave posts unanswered. I wanted to say that division of ideals is not so challenging as it seemed at first glance. They call the ideal that does the dividing larger (as in a larger set), for theoretically its elements have greater density along the number line than the ideal it is dividing, such as 2Z dividing 4Z. The first may have more "elements" per unit distance, but we realize both sets are actually infinite, so there is no real difference in their cardinality, only in their density of occurrence along the number line. However, it is plain to see that 2Z has a smaller generator than 4Z, and 4 is a multiple of 2, so naturally 2Z can divide 4Z evenly.

As for multiplication of ideals, I am still trying to locate a paper I recently glanced at which explains it. 

If you multiply two elements from the ideal, the result ends up back in the ideal. The cool thing about ideals is that if you multiply one of its elements by an element of the ring that is not in the ideal, the result ends up back in the ideal, too. They call that "absorbing" multiplication. In a non-commutative ring, a one-sided ideal absorbs multiplication from either the left or the right.
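The absorption property is easy to check numerically for the simplest case, the ideal 2Z inside Z. A sketch of my own, not from the thread:

```python
# Numeric sketch (not a proof) of the absorption property
# for the ideal 2Z inside the ring Z.

def in_2Z(n):
    """Membership test for the ideal 2Z = {..., -4, -2, 0, 2, 4, ...}."""
    return n % 2 == 0

ideal_sample = [-4, -2, 0, 2, 4]        # elements of the ideal 2Z
ring_sample = [-3, -1, 0, 1, 2, 5]      # arbitrary ring elements from Z

# Product of two ideal elements stays in the ideal,
assert all(in_2Z(a * b) for a in ideal_sample for b in ideal_sample)

# and so does the product of an ideal element with ANY ring
# element: this is the "absorbing" multiplication described above.
assert all(in_2Z(a * r) for a in ideal_sample for r in ring_sample)

print("absorption checks passed")
```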

I now have to look at some details of multiplication, then actually multiply some ideals together to inspect the results, then I should be done with this part and ready to investigate the relationship of Carmichael numbers to ideals.

----------


## desiresjab

I myself must now be gone for a few days traveling. I will investigate ideals and multiplication before I go so I will have something to dwell on while I am away. May you all be here and in good health when I return.

----------


## YesNo

I think I missed your second-to-last post. What you say makes sense to me. I don't know how ideals fit in with Carmichael numbers, but I heard they do in some way. I'll see if I can find out more about that.

Here's something on the multiplication of ideals that I thought was interesting:

This one shows that the product of two ideals is not their intersection; in particular, 2Z multiplied by 2Z is 4Z, but that is not the intersection of 2Z with itself: http://math.stackexchange.com/questi...e-intersection

That question also gives a definition of what multiplying two ideals means. Let I and J be ideals and ij the product of one element from I and one from J. Consider the set of all finite sums of such products. That would be the product of the two ideals. The sums are kept finite by definition; an infinite sum need not stay in the ring.

----------


## desiresjab

That is the problem. I do not see where 2Z or 4Z is finite. Element by element multiplication makes perfect sense for a finite number of elements. How do these two ideals (any two ideals) work out to be finite? I do not see that.

----------


## YesNo

One restricts the sums to be over a finite number of products by definition. The definition makes it finite. So we have I = 2Z = {...-4,-2,0,2,4,...}. Suppose we want to multiply the ideal I by itself. We can construct a set that contains every product, 2m * 2n, where m and n are integers. This multiplies every element in I by another element in I. Now take any finite number of those products and add them together. This becomes an element in the ideal that is formed from the product of 2Z * 2Z. The smallest positive integer in that product of ideals would be 4 and that would be the generator of the ideal.  This is what we would expect because we get 2Z * 2Z = 4Z = {...-8,-4,0,4,8,...}.
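The 2Z * 2Z = 4Z computation above can be sketched numerically. A check of my own, sampling a finite window of the ideal:

```python
# Numeric sketch of the ideal product 2Z * 2Z = 4Z.
# Every product (2m)(2n) = 4mn is a multiple of 4, and so is any
# finite sum of such products.

import itertools

I = [2 * k for k in range(-4, 5)]            # a finite window into 2Z

# all pairwise products of elements drawn from the window
products = [a * b for a in I for b in I]

# finite sums of up to three products at a time
sums = {sum(c) for c in itertools.combinations(products, 3)}

assert all(s % 4 == 0 for s in sums)         # everything lands in 4Z
assert 4 in products                         # 2*2 = 4, the generator
print("finite sums of products from 2Z all lie in 4Z")
```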

Regarding Carmichael numbers the Wikipedia article says that the idea of Carmichael numbers is extended to other algebraic structures through ideals. Ideals don't help us solve problems about Carmichael numbers in the regular integers. So, ideals help generalize the idea of Carmichael numbers to other algebraic structures: https://en.wikipedia.org/wiki/Carmichael_number

----------


## desiresjab

Thanks again. I have to revisit an article to see if I can establish for a fact that the sum of two ideals is simply their GCD. That is very curious. Well, let me see. How about a concrete example? Suppose I took two numbers such as 19 and 12. When I add them, their sum of 31 would actually be 1, in that case. So, yes, that seems curious. It seems modular. I can see where cycling back to 1 occurs in a cyclic ring such as a modulus, but how does that apply here? Of course the GCD of 19 and 12 is 1, so in that regard it makes sense. The notion here seems "not usual," and I will have to think about it some more and read some more.

----------


## YesNo

I think one defines the sum of two ideals as the set of everything one gets by adding an element of one ideal to an element of the other. So if (19) is an ideal in Z and (12) is an ideal in Z, their sum would contain elements like 19a + 12b for integers a and b.

Thinking about this as a greatest common divisor, since we know that the GCD of 19 and 12 is 1, we should be able to write an equation like this: 19x + 12y = 1 for some integers x and y. 

But now think of 19x as some element in (19) and 12y as some element in (12). This matches up the result of the GCD operation and the sum of two ideals.
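The equation 19x + 12y = 1 can be produced mechanically. A sketch of my own using the extended Euclidean algorithm:

```python
# Sketch: the ideal sum (19) + (12) = {19a + 12b} equals (gcd(19,12)) = (1).
# Extended Euclid produces explicit x, y with 19x + 12y = 1.

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(19, 12)
assert g == 19 * x + 12 * y == 1
print(f"19*({x}) + 12*({y}) = {g}")
```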

----------


## desiresjab

More is becoming clear but the muddled parts still drive me nuttier. There are quite a few items in the following article which confuse me. For instance, I do not "see" or understand statements like

Z[√-5] is already the full ring of integers of its quotient field Q(√-5). The examples they use are always √-3 and √-5, because larger ones quickly start to become unwieldy. That we would like to understand the difference in behavior in this arena between 4n+1 and 4n+3 primes, for apparently there is one, is about all I could get out of pages four and five of

http://www2.math.ou.edu/~kmartin/nti/chap11.pdf

Though I always pick up additional incidental details I do not fully understand or which, on the other hand, make perfect sense to me.

----------


## YesNo

I don't understand this well enough at the moment. I will try to read the link again. I can see how Z[sqrt(-3)] can take advantage of ideals by calling a non-principal ideal an ideal number that doesn't exist in that set of integers and using that to get unique factorization, but that is as far as I see at the moment.

----------


## desiresjab

I am reporting from the site of another logjam. It surprised me to find out I did not know the *proper* procedure for generating the lattice on the Wolfram link:

http://mathworld.wolfram.com/Ideal.html

I can see too many ways to do it; I do not know which is the correct way. It seems to me from the theory I should be able to fill in <2> and <1+i> separately or as an active combination in <2, 1+i>. I can see how to get all the even numbers on the grid. I am not sure how to get the 1+i's beyond the inner group of 1+i, -1+i, etc. I may be more confused than I thought I was. I have cleared some path ahead but had to return to this.

----------


## YesNo

Since (1+i)*(1-i)=1+1=2, <1+i> should generate the same lattice as <2, 1+i>. To look at <2> separately, multiply all the Gaussian integers in the visible part of the lattice by 2 and see where they lie. Certainly any even numbers on the real axis would be in the ideal, as well as those on the complex axis. Some of the other points off the axes, but not all of them, should be present as well.

----------


## desiresjab

I am trying to see if I can treat <2, 1+i> like Cartesian coordinates.

Points on the a+bi axis (y-axis) are not clear to me. The red point in the upper right of the diagram looks like the Cartesian point (3, 3), but must have a different description in Gaussians. Since I do not even know what that point is I cannot figure out what to multiply by to get that point either.

----------


## desiresjab

Wait, I see my stupid mistake. The _y_-axis is not _a+bi_, it is the _bi_ part alone of the expression. With that bit of foolishness out of the way, getting the points down the right way might be easier. Still don't know if I can generate them all the proper way.

----------


## YesNo

Right. The x axis and the y axis would be the way one plots a real function, f(x) = y. The real axis and the complex axis would be how one plots a + bi. They have different names for these axes but both share in common the need to plot something in two dimensions.

There is a chapter in Birkhoff and MacLane on algebraic number fields that I hope will resolve some confusion I am having with Z[sqrt(-5)] and Z[sqrt(-3)] and how ideals help with unique factorization.

----------


## desiresjab

Okay, I have shown myself how to generate every point on the lattice through multiplication. These lattice diagrams may prove to be as germane to the study of ideals as Eisenstein's lattice diagram was for quadratic reciprocity. I want to make sure I get out of it everything there is to get, for I notice I can also take the additive approach and generate the same lattice, I believe. From any even number I can step 1+i or 1-i and get the remaining points, I mean. I still do not know if one way is the preferred way to see the lattice.

----------


## YesNo

I don't know if there is a preferred way to see this. Understanding something at all is all I aim for when looking at something I am unfamiliar with. However, finding different ways may lead to new results. That would imply a deeper understanding.

----------


## desiresjab

The lousy symbol processor on this site will not let me post again. I have a long post written. I do not feel like going through it to see what this system is objecting to, when I know it is the system again, not me.

This system uses brackets heavily for special functions, so whenever you use brackets in your post, especially in conjunction with a number, the system thinks you are trying to interfere with its proprietary commands or something. Tired of it. These posts are hard to write; they include a lot of thinking.

----------


## desiresjab

What seems queer is I do not believe I could generate points like (1+3i) and (3+i) with 2 by itself, but I can generate them all with (1+i), which is not surprising, but the fact that I can apparently generate all the evens with (1+i) as well, is, somewhat, at least. Of course, I still have to have 2 in there a bunch of times as a multiplier to accomplish this when using (1+i) as the generator.

Sorry I am having to post piecemeal what would have all been in one post.

----------


## desiresjab

Not sure what all this means. I feel certain key ideas have to be seen with perfect clarity, and this is one of them. Every point on this lattice has to be accounted for, using the generators given. There might be (almost certainly are) other generators that would fill in the exact same lattice, but I am only concerned with these two generators right now, what they do, and especially how they do it, whether in tandem or in isolation. See what I'm sane?

Unfortunately, what I had to leave out were the actual multiplications which showed how I arrived at each point. Frustrating.

----------


## desiresjab

(2-i)(1+i) = 2+2i-i-i² = 2+i+1 = 3+i

----------


## desiresjab

Christ, to get 1+3i, multiply 1+i and 2+i together. What is this garbage processor?

----------


## desiresjab

Some legal multiplications produce results outside the lattice. Hmmm. Such as:

(1+i)(2+2i) = 2+2i+2i+2i² = 4i

----------


## desiresjab

I think I know what you are trying to see. I do not know how to see it either, but I believe I know what you are trying to see. The extension fields of √-5 and √-3. I cannot figure out what the devil they are talking about when they speak of the primary difference between these two. It has to do with quotient fields and every integer of the quotient field already being there, at least for √-5, or some such thing.

Mathematicians have uglied up the quotient field thing real heavy with symbols. It may just mean this:

When you use an integer as your modulus over this complex field, your remainder often comes out with an imaginary piece. Well, technically it would always be there, but invisible when its coefficient was 0. A quotient field is somehow connected to this idea, I believe, but I do not know how, and I certainly cannot prove or demonstrate it at this point.

The objects we are studying are becoming extremely abstract. It makes you appreciate the genius of the people who got there first with only their imaginations to guide them.

----------


## YesNo

> What seems queer is I do not believe I could generate points like (1+3i) and (3+i) with 2 by itself, but I can generate them all with (1+i), which is not surprising, but the fact that I can apparently generate all the evens with (1+i) as well, is, somewhat, at least. Of course, I still have to have 2 in there a bunch of times as a multiplier to accomplish this when using (1+i) as the generator.
> 
> Sorry I am having to post piecemeal what would have all been in one post.


You might try installing Python through the Anaconda distribution. You can then create Jupyter notebooks and use MathJax, which I think is close to LaTeX. We could share these notebooks.

The Gaussian integer 1+i should generate all the points on the lattice as you saw. Since 1+i divides 2, 2 should not generate all of the lattice points but only a subset of them. Although 4i is not on the portion of the lattice visible in the link, the lattice contains infinitely many points. It is just off the part that was shown.
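A small numeric check of my own: membership of a+bi in <1+i> can be tested by dividing and looking for integer parts, and it matches the parity of a+b:

```python
# Sketch: which Gaussian integers lie in the ideal <1+i>?
# z is divisible by 1+i exactly when z/(1+i) has integer real and
# imaginary parts; numerically, that happens iff a + b is even.

def divisible_by_1_plus_i(a, b):
    """True if a+bi is in the ideal <1+i> of the Gaussian integers."""
    q = complex(a, b) / complex(1, 1)
    return q.real == int(q.real) and q.imag == int(q.imag)

# Lattice points of <1+i> include the "even" points like 2 and 4i
# as well as mixed points like 3+i and 1+3i.
assert divisible_by_1_plus_i(2, 0)      # 2 = (1+i)(1-i)
assert divisible_by_1_plus_i(0, 4)      # 4i = (1+i)(2+2i)
assert divisible_by_1_plus_i(3, 1)      # 3+i = (1+i)(2-i)
assert divisible_by_1_plus_i(1, 3)      # 1+3i = (1+i)(2+i)
assert not divisible_by_1_plus_i(1, 0)  # 1 is not in the ideal

# the parity characterization over a small window
assert all((a + b) % 2 == 0
           for a in range(-5, 6) for b in range(-5, 6)
           if divisible_by_1_plus_i(a, b))
print("membership in <1+i> matches the parity test: a+b even")
```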

----------


## desiresjab

Next, I sense we need to do some actual, ugly long-hand dividing in this territory to unlock some secrets and lighten some dark passageways, but I do not even know how to start. We need the semblance of a problem to solve. Do you have a light?

----------


## YesNo

The way I would approach dividing is to construct the reciprocal and multiply. The reciprocal exists in the complex numbers but perhaps not in the Gaussian integers, so we can construct it. 

For example suppose I wanted to divide 2 by 1+i. I would write that as 2/(1+i). But 1/(1+i) = 1(1-i)/(1+i)(1-i) = (1-i)/2. I multiplied the denominator (1+i) by its conjugate (1-i). That will give me an integer in the denominator. If I do that to the denominator, I have to do it to the numerator. That is why I multiplied 1/(1+i) by (1-i)/(1-i)=1. Now I can multiply 2 by (1-i)/2 and get 1-i.
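The conjugate trick above can be carried out with exact rational arithmetic. A sketch of my own, using Python's fractions module:

```python
# Sketch of the conjugate trick: divide a+bi by c+di by multiplying
# numerator and denominator by the conjugate c-di, which makes the
# denominator the integer norm c^2 + d^2.

from fractions import Fraction

def divide_gaussian(a, b, c, d):
    """Divide a+bi by c+di; returns (real, imag) as exact fractions."""
    norm = c * c + d * d                 # (c+di)(c-di) = c^2 + d^2
    real = Fraction(a * c + b * d, norm)
    imag = Fraction(b * c - a * d, norm)
    return real, imag

# 2 / (1+i) = 2 * (1-i)/2 = 1 - i, as computed above
assert divide_gaussian(2, 0, 1, 1) == (1, -1)
print("2 / (1+i) =", divide_gaussian(2, 0, 1, 1))
```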

That may not be what you are looking for. Being able to formulate a problem even if one cannot solve the problem is valuable work.

----------


## desiresjab

Thank you.

I think I am seeking the tie-in between addition and multiplication in ideals. I find something slightly peculiar there.

We know I can generate the Wolfram lattice with just 1+i. We know I cannot generate that lattice with merely the ideal of 2. But 2 should give me all the even points on the lattice, as I see it. From any even point I am able to reach the "odd" points simply by adding 1+i to that even value; in other words it seems just like combining the two ideals through addition, and it appears to work. I do not know if this method is valid. It seems like it would have to be. It seems like it represents that tie-in I am looking for between addition and multiplication in this realm.

Above all, ideals seem to be additive objects. I believe but cannot prove that I have now demonstrated this relationship. I have generated all the points on the lattice two different ways (separately through addition & multiplication), and I believe they are both valid, not just a coincidence. Perhaps I am wrong, but you see now what I am looking for, whereas you might not have before.

----------


## desiresjab

This link you once provided may have been excerpted from a class using a John Stillwell book on the history of mathematics, which I understand has a large section on abstract algebra.

http://www2.math.ou.edu/~kmartin/nti/chap11.pdf

----------


## desiresjab

Though I am looking right at the examples in all these articles, I cannot yet see exactly how unique factorization has been recovered through ideals. I may be close, but not quite there yet. For me it may be a matter of tying together those definitions of ideals which depend on addition and those for multiplication.

When I get as close as I sense I am now, I begin to believe the job will be completed. I think we shall lay ideals bare soon.

----------


## desiresjab

I have to travel for a few days. Right when I would rather stay home and study I am forced to go on the road.

The difference between primes and irreducibles is still a problem to untangle. The article linked to last says the *key idea* in working with ideals is that the irreducibles and the primes do not match up in that realm. Now that is in the article almost word for word. As is the fact that the sum of two ideals is their GCD. Those are their words not mine. You know what I'm sane?

----------


## YesNo

> Though I am looking right at the examples in all these articles, I cannot yet see exactly how unique factorization has been recovered through ideals. I may be close, but not quite there yet. For me it may be a matter of tying together those definitions of ideals which depend on addition and those for multiplication.


I am trying to make sense of that also, but I get distracted during the day. 

I agree with what you said about adding terms from both <2> and <1+i> to get the ideal <2, 1+i>. In this case the ideal is principal and can be written as <1+i> in the Gaussian integers.

If one looks at Z[sqrt(-3)] there is an ideal that looks similar: <2, 1+sqrt(-3)>. Note that instead of i, we have sqrt(-3). This ideal is not principal because 2 does not divide into 1+sqrt(-3) like 2 was able to divide into 1+i in the Gaussian integers and get a Gaussian integer back. So what I understand we are to do is consider the ideal <2, 1+sqrt(-3)> as a new ideal number that we will add to Z[sqrt(-3)]. That is where the ideals help. That's how I understand it at the moment, but I might change my mind. This approach doesn't help with Z[sqrt(-5)] and that is where I am puzzled at the moment.
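The non-divisibility of 1+sqrt(-3) by 2 can be checked with exact rational arithmetic. In this sketch of my own, an element a + b*sqrt(-3) is represented as the coefficient pair (a, b):

```python
# Sketch: in Z[sqrt(-3)], represented as coefficient pairs (a, b)
# meaning a + b*sqrt(-3), the quotient (1 + sqrt(-3))/2 would be
# (1/2) + (1/2)*sqrt(-3), whose coefficients are not integers, so
# it does not land back in Z[sqrt(-3)].

from fractions import Fraction

def divide_by_2(a, b):
    """Divide a + b*sqrt(-3) by 2; exact rational coefficients."""
    return Fraction(a, 2), Fraction(b, 2)

q = divide_by_2(1, 1)                  # (1 + sqrt(-3)) / 2
assert q == (Fraction(1, 2), Fraction(1, 2))
assert not all(c.denominator == 1 for c in q)   # not in Z[sqrt(-3)]
print("(1+sqrt(-3))/2 has coefficients", q)
```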

----------


## desiresjab

> I am trying to make sense of that also, but I get distracted during the day. 
> 
> I agree with what you said about adding terms from both <2> and <1+i> to get the ideal <2, 1+i>. In this case the ideal is principal and can be written as <1+i> in the Gaussian integers.
> 
> If one looks at Z[sqrt(-3)] there is an ideal that looks similar: <2, 1+sqrt(-3)>. Note that instead of i, we have sqrt(-3). This ideal is not principal because 2 does not divide into 1+sqrt(-3) like 2 was able to divide into 1+i in the Gaussian integers and get a Gaussian integer back. So what I understand we are to do is consider the ideal <2, 1+sqrt(-3)> as a new ideal number that we will add to Z[sqrt(-3)]. That is where the ideals help. That's how I understand it at the moment, but I might change my mind. This approach doesn't help with Z[sqrt(-5)] and that is where I am puzzled at the moment.


Of course the notion of 2 dividing into 1+i is odd anyway. It does not fit with our intuition. We could much more easily appreciate how 1+i divides into 2. But with the peculiar definition of division in ideals we have 2 dividing 1+i.

I think better with my computer nearby. I will take a couple of days to try and bring everything together.

----------


## YesNo

I think I stated that wrong. 1+i divides 2 since (1+i)(1-i)=2. However, the ideal <2, 1+sqrt(-3)> in Z[sqrt(-3)] is not principal but it is prime or irreducible. 

Using non-principal but irreducible ideals is the way that ideals recovered unique factorization in those Dedekind domains that otherwise did not have unique factorization of elements. There's something called the Fundamental Theorem of Ideal Theory that states that those Dedekind domains have unique factorization using ideals. I am trying to understand that proof. The Birkhoff and MacLane book did not cover the proof, so I am reading Harry Pollard's "The Theory of Algebraic Numbers" to try to understand it better.

----------


## desiresjab

What I am finding is that whether a term is irreducible or prime seems to depend on the domain itself. For instance, in Z there does not seem to be any difference between irreducible and prime elements. One must always be aware whether one is in an I.D., a P.I.D. or a field to know these things. As I understand it, the definition of prime as we are familiar with it (no divisors besides units and itself) is really the definition that defines irreducibles in ideal theory, while primeness proper is Euclid's property: p dividing ab forces p to divide a or b.

There is also a theorem of Hilbert which converts any non-principal ideal into a principal one through multiplication of the ideal by a special Hilbert number. No details on this one yet.

Well, one thing we can say for sure is that a maximal ideal is always a prime ideal. Maximal ideals are fairly easy to get a handle on, thank Gog. I am unable to determine if maximal ideals include all of the prime ideals. I do not think so. So far, I believe there are other prime ideals which are not maximal but, of course, no maximal ideals which are not prime.

Sounds like you are really digging into the subject now. Some major insights must be on their way to you.

----------


## YesNo

Whatever insights I am getting have been discovered long ago. I am just sorting through the puzzle. I find this Wikipedia page on Dedekind domains interesting at the moment: https://en.wikipedia.org/wiki/Dedekind_domain

Dedekind domains are integral domains in which there is unique factorization of ideals even though there may not be unique factorization of elements themselves. What that suggests to me is that there should be an example of a ring that is not a Dedekind domain, that is, where unique factorization of ideals does not work.

----------


## YesNo

Many insights are obvious after they are discovered. Then one wonders why one couldn't see them before. Here's one obvious insight that just recently became obvious to me.

Consider the greatest common divisor, g, of two integers, a and b. Given g, one can find two other integers x and y such that g = ax + by. Note the linear combination of a and b. If one considers all such combinations of a and b, one has the ideal generated by a and b, written <a, b>. Written that way the ideal does not look principal, but because the integers form a principal ideal domain, one can find a single generator for <a, b>, which would be g, forming the principal ideal <g>.
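That gcd-as-linear-combination insight can be probed numerically. Here is a sketch of my own using the non-coprime pair a = 6, b = 10, where the ideal <6, 10> collapses to <2>:

```python
# Sketch with a = 6, b = 10: over a sampled window, the set of linear
# combinations {6x + 10y} is exactly the principal ideal generated by
# g = gcd(6, 10) = 2.

from math import gcd

a, b = 6, 10
g = gcd(a, b)                               # g = 2

combos = {a * x + b * y for x in range(-20, 21) for y in range(-20, 21)}
multiples_of_g = set(range(-40, 41, g))     # a window into gZ

# every combination is a multiple of g, and every multiple of g in
# the window arises as a combination: <6, 10> = <2>
assert all(c % g == 0 for c in combos)
assert multiples_of_g <= combos
print("<6, 10> = <2> on the sampled window")
```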

I found on my bookshelves a translation of Dedekind's "Theory of Algebraic Integers" translated by John Stillwell. That insight about the gcd I mentioned above came from Stillwell's introduction, page 7. I forgot I even had that book and now for the first time, thanks to your discussion of these issues, desiresjab, I might actually finish reading it.

----------


## desiresjab

This is great. I urgently need all my concentration now to take steps in quicksand.

I believe Stillwell is the guy whose book (which you may be reading) was being used as the text for the course taught here:

http://www2.math.ou.edu/~kmartin/nti/chap11.pdf

I am amazed by how many details I can pick up without the whole thing falling into place before my eyes. Sometimes even stubborn illusions are replaced by local understanding unable to force global epiphany.

Quite a few hurdles are terminological. When one finally receives the right information in the right form, the information usually sticks to the term like glue in the mind, but it often takes a long time for that event to happen--too long, for my tastes. I am dissatisfied with my mind's ability to gobble up these ideas like a young hen picking up corn. I would rather have a greater mind to work with than this old clunker.

* * * * *

There are three or four issues which, if I could resolve them, would give me a decent grip on ideals, unique factorization and field extensions, and precisely how they all fit together and what in concert they have achieved.

Another barricade is that much of ideal theory is expressed in group-theoretical notation, with groups leading the idea train. From the beginning ideals had deep connections with group theory and were developed with it in mind. I do not know that subject well enough to catch all the hints.

----------


## YesNo

The way I avoid quicksand is to sleep on it, stop reading, skim, or switch to some other text when the one I'm reading becomes too difficult. Of course, that means I might never go back to the original text, which is what happened with that Dedekind book long ago. I have only read Stillwell's introduction, but he has a very good style. The parts I understand are clear. The parts I don't understand are probably my own fault.

Another insight that is now obvious to me is that when one extends the rationals Q with sqrt(-1) = i to get Q(i) one gets a field smaller than the complex numbers. It doesn't even contain all the real numbers. For example, it doesn't include any transcendental numbers such as pi or e because they aren't in Q and they can't be formed from a + bi where a and b are in Q. But when I see something written as a + bi I assume I am working in the complex numbers when actually I am working in a subfield of the complex numbers.

----------


## desiresjab

The only number in the whole field which is not rational is the field extension itself. But I must confess I am at a total loss understanding the difference between extending by √-3 and extending by √3·i, or something like that. I truly do not understand this difference or whatever advantage might accrue to it. This issue is one of the three or four items I still must bring under control. You addressed it once but I still could not understand.

----------


## desiresjab

Another thing which still throws me is--it is just like me these days to forget what it was before I have the sentence finished, dammit!

Oh, the other thing which confuses me is how 1+√-3 _et al_ got into the picture at all. We were considering √-3, for instance, and suddenly here we are with 1+√-3. What justifies that 1 out in front? I understand it helps because it has a conjugate and I am not certain if mere √-3 does. Still, I have a hard time justifying its presence. Do you understand what the justification is?

----------


## desiresjab

Suddenly, I think I see how a Quotient Ring works.

Take the polynomial x2+3=0

If we subtract x2+3 from any other polynomial, the difference either is or is not a multiple of x2+3. If it is a multiple, we call that result equivalent to x2+3 itself. We may even call it equivalent to the second polynomial. I am not sure about that point. Probably not, actually. Well, hmmm..., I don't know. More thought.

This is really only modular arithmetic using x2+3 as the modulus.

----------


## desiresjab

If I take (x²+4)-(x²+3)=1,

that seems to imply to me that 1 and x²+3 are equivalent, since it does seem that everything (and this includes x²+3) is a multiple of 1.

Yet it definitely puts x²+4 in the residue class (equivalence class?) of 1, where x²+3 is in the 0 class mod itself, so it is hard for me to see them as equivalent.

The above indicates to me that 1 and x²+4 are in the same residue class (equivalence class?), not that 1 and x²+3 are, which is impossible when x²+3 is the modulus.

It would be possible for polynomials A and B to be in the same residue class. In that case their difference B-A would land in the class of 0.

Each residue class (equivalence class?) must form a coset of the ideal. All polynomials leaving a residue of 1, for instance, would form one class. All polynomials leaving a residue of 2 would form another, etc. All polynomials leaving a residue of 0 would be in the same class as x²+3, and that class is the ideal itself.
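This "modular arithmetic with x²+3 as the modulus" idea can be sketched in a few lines. My own illustration; polynomials are coefficient lists, lowest degree first:

```python
# Sketch of reduction mod x^2 + 3: take the remainder on division by
# x^2 + 3, which amounts to repeatedly replacing x^2 with -3.
# Polynomials are coefficient lists [c0, c1, c2, ...] standing for
# c0 + c1*x + c2*x^2 + ...

def reduce_mod(poly):
    """Remainder of poly on division by x^2 + 3 (so x^2 -> -3)."""
    p = list(poly)
    while len(p) > 2:
        c = p.pop()                  # leading coefficient of x^n, n >= 2
        p[-2] += -3 * c              # x^n -> -3 * x^(n-2)
    return p

# x^2 + 4 reduces to 1: it sits in the residue class of 1,
# while x^2 + 3 itself reduces to 0.
assert reduce_mod([4, 0, 1]) == [1, 0]
assert reduce_mod([3, 0, 1]) == [0, 0]
print("x^2+4 is in the class of 1; x^2+3 is in the class of 0")
```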

----------


## YesNo

> Another thing which still throws me is--it is just like me these days to forget what it was before I have the sentence finished, dammit!
> 
> Oh, the other thing which confuses me is how 1+√-3 _et al_ got into the picture at all. We were considering √-3, for instance, and suddenly here we are with 1+√-3. What justifies that 1 out in front? I understand it helps because it has a conjugate and I am not certain if mere √-3 does. Still, I have a hard time justifying its presence. Do you understand what the justification is?


I think sqrt(-3) = sqrt(-1)*sqrt(3) = i*sqrt(3). 

An element a + b*sqrt(-3) has two terms because it is generated by 1 and sqrt(-3) over the rationals, so one has a linear combination of both generators. If we let a and b be in Q, then a*1 + b*sqrt(-3) can take any value in the field extension Q(sqrt(-3)).

----------


## YesNo

> If I take (x²+4)-(x²+3)=1,
> 
> that seems to imply to me that 1 and x²+3 are equivalent, since it does seem that everything (and this includes x²+3) is a multiple of 1.
> 
> Yet it definitely puts x²+4 in the residue class (equivalence class?) of 1, where x²+3 is in the 0 class mod itself, so it is hard for me to see them as equivalent. 
> 
> The above indicates to me that 1 and x²+4 are in the same residue class (equivalence class?), not that 1 and x²+3 are, which is impossible when x²+3 is the modulus.
> 
> It would be possible for polynomials A and B to be in the same residue class. In that case their difference B-A would land in the class of 0.
> ...


The equation at the top is an identity, so it should work for any x. However a polynomial is 0 only at its few roots. 

If x²+3 is the modulus, then any polynomial times it would have those two roots in common with it. It is like a prime, say 3: any integer times 3 would be in the ideal generated by 3.

----------


## desiresjab

Okay. I am back. I think I see everything so far except the fundamental difference between √-5 and √-3 and why a different treatment is necessary, since we were missing the treatment of √-3 in the article we have linked to several times. That excerpt speaks of that treatment but never shows it, though it gives a hint for the enlightened.

----------


## YesNo

As I see it now, I don't think the treatment from the perspective of ideals is different. In both cases, when we look at ideals, the ideals factor into prime ideals (although those prime ideals are not principal ideals because they have more than one generator). What makes a prime is the property, going back to Euclid, that if a prime p (from whatever ring) divides a product ab, then p must divide a or p must divide b. The prime is not allowed to divide some of each, as 6 dividing 2*3 does. 

When we look at elements of the ring themselves, they don't factor uniquely into primes. I still don't understand the fundamental theorem of ideal theory, so I don't see why unique factorization has to work yet. The examples showing that it does work can be checked, but they are just examples.

There is that other difference in the original article we looked at. I think it referred to extending the Z[sqrt(-3)] ring, however, I don't see it at the moment.

----------


## desiresjab

On the difference between treatments of 3 and 5 in the second paragraph of the article, I notice that 5 can be factored in Gaussian integers without the use of a √ sign, and that 3 cannot be, and I wonder if this has any bearing on their primary difference, which I suppose must ultimately be related to the fact that 3 is a Gaussian prime and 5 is not. 

We also know that the rational primes which remain Gaussian primes cannot be represented as the sum of two squares. Those are the 4n+3 type primes in the integers.

What I am still trying to figure out is what the article means when it says that Z[√-5] already contains all the integers of its quotient field, implying that Z[√-3] does not. And then the article says that is why they cannot simply add more numbers to the extension field of √-5 the way they did for √-3. Of course that mysterious work comes in a previous part of the book we did not get to see.

Could it be that the failure of 3 to factor nicely without stepping out of Gaussian integers the way 5 does into (2+i)(2-i) is involved?
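The contrast between 3 and 5 in the Gaussian integers can be checked numerically. A sketch of my own:

```python
# Numeric sketch: 5 = (2+i)(2-i) factors in the Gaussian integers,
# while 3 does not. A nontrivial Gaussian factor of 3 would need
# norm 3, and a^2 + b^2 = 3 has no integer solutions (3 is a 4n+3
# prime, not a sum of two squares).

assert (2 + 1j) * (2 - 1j) == 5         # 5 splits in Z[i]

# norms a^2 + b^2 of Gaussian integers in a small window
norms = {a * a + b * b for a in range(-3, 4) for b in range(-3, 4)}
assert 3 not in norms                   # no Gaussian integer has norm 3
assert 5 in norms                       # 5 = 2^2 + 1^2
print("5 splits in Z[i]; 3 stays a Gaussian prime")
```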

I am sure there is a simple division process to demonstrate this. I am not sure how to do it. But I do have a crude idea.

If the √ of anything is part of the quotient field, I see how dividing by it could mess things up. Not too helpful, I know, but a small insight. You would not get nice integers anymore.

It hadn't ought to be that hard to construct quotient fields for 3 and 5, should it, and demonstrate how the latter already contains all the integers under it and the former does not?

----------


## YesNo

I am having trouble understanding that same distinction mentioned in the article. Dedekind introduced the language of ideals, but Kummer came before him and introduced ideal numbers. I think they are referring to Kummer's approach rather than Dedekind's, but I don't know yet. The sqrt(-5) is supposedly the critical one leading to the results in algebraic number theory, not sqrt(-3), but I don't see it yet.

----------


## desiresjab

Later on in the article, they say on page 4, ex. 11.5:

Z[√-3] is not a Dedekind domain (which I believe Z[√-5] is). It is not a Dedekind domain because it does not contain every integer in its quotient field.

It appears from the article we may have to pass to the ring of ξ, or whatever that strange symbol is they keep using. This seems to be another level of abstraction to tackle. Happily, it may be the last great hurdle on this particular journey.

----------


## YesNo

I see that part. I am puzzled by it. I would have thought that Z[√-3] would have all the integers from Q[√-3]. Apparently it is not "integrally closed". That means there must be "integers" in Q[√-3] that are not in Z[√-3]. It looks like one also has to include (1+√-3)/2 = (1/2) + (1/2)√-3. But 1/2 is not an integer in Z.

----------


## desiresjab

Whatever is in the quotient fields of √-3 and √-5 only gets there after the proper division, right? Maybe there _are_ some "objects" in the quotient field of Z[√-3], but what the article says is that Z[√-5] contains all the integers in its quotient field, suggesting that Z[√-3] does not. We have exactly the same problem at this point.

If x²+3 is our polynomial, it is also our quotient field modulus, isn't it, or do we simply use √-3?

This only goes to show what a paucity there is of clear, specific examples in this literature. The PhD candidates who write these posted papers would not dare insult those judging their dissertations with anything so lowly and nasty as specific numerical examples worked out here and there.

Mathematicians write only for those who are 100% up on the language. Even professors do this teaching their courses. They expect everyone to already understand and be intimately familiar with whatever is being studied. Good examples are for neophytes.

I have aired this gripe before. Of course most math articles are not written with me in mind. Nonetheless, I think mathematicians are generally bad writers and bad teachers.

----------


## YesNo

I agree it isn't clear. I did find a Wikipedia article that is making sense to me at the moment although it is not completely clear: https://en.wikipedia.org/wiki/Quadratic_integer

The quadratic integers are roots of monic quadratic equations. Looking at Q(sqrt(-3)) we can find the ring of integers OQ(sqrt(-3)), which is integrally closed because we take all quadratic integers from Q(sqrt(-3)), but this is larger than the ring Z[sqrt(-3)]. There we are starting with the ring Z, not the field Q, when we make the extension. I think that is where part of the confusion comes from. What we need to extend the rational integers Z by is not sqrt(-3) but (1+sqrt(-3))/2 to get all the roots of all monic equations where only the sqrt(-3) appears as an algebraic number. However, if we are looking at a field like the rationals Q, which is the field of fractions of Z, we can simplify this to Q(sqrt(-3)) because 1/2 is in Q already. It is not in Z unless we put it in there somehow.

The monic quadratic equations containing sqrt(-3) in the root somewhere would be x² + 3 = 0 and x² + x + 1 = 0. The second is a factor of x³ - 1, making that root one of the cube roots of unity.
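That root can be checked numerically. A short Python sketch (my own, not from the article) confirming that (-1+√-3)/2 satisfies x² + x + 1 = 0 and is a cube root of unity:

```python
import cmath

# The root of x^2 + x + 1 = 0 containing sqrt(-3): w = (-1 + sqrt(-3))/2.
w = (-1 + cmath.sqrt(-3)) / 2

# It satisfies the monic equation and is a cube root of unity.
print(abs(w * w + w + 1))   # ~0 (floating point)
print(abs(w ** 3 - 1))      # ~0
```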

The challenge is to understand the concepts and then maybe write a clearer exposition.

Edit: Here's another link. I like the first answer and the way the question was formed: http://math.stackexchange.com/questi...ed-in-that-way

----------


## desiresjab

That article seems to have about everything in it that is needed for understanding the subject the way I would like to. It is a very tough piece. I have stayed away from it because it hurts my brain and I have not been feeling that well. But now I feel okay and I need to go after it. The fine points are somewhat explained. But of course there are always things you wish they had cleared up in any article on the subject.

In order to properly understand ideals all these related areas like field extensions and polynomial rings must be comprehended. So in the end it is a much bigger task to learn than quadratic reciprocity was.

----------


## YesNo

In a sense the task is large, but it is like a huge jigsaw puzzle. Once one gets a piece in place, it's there. Relearning is easier after one has forgotten something.

My confusion with the earlier link was that it talked about Z[sqrt(-3)], but that is an example of a ring that is not a Dedekind domain. It shows that they exist. One needs to start with Q, the rationals. Then note that Z is the ring of integers, OQ, in Q. An algebraic integer in an algebraic number field is the root of a monic polynomial with integer coefficients. In the case of Q, the monic polynomial is x - n = 0 where n is an integer. We get the expected ring of integers, Z.

----------


## desiresjab

What is true for 3 and -3 is true for all 4n+3 type primes, correct? That would make things a bit easier to organize.

----------


## YesNo

The -3 is a 4n+1 type. If you add 4 to -3 you get 1. The quadratic algebraic integers, those that are roots of monic polynomials of degree 2, separate into two groups depending on mod 4 (except for 2 which is handled separately). 

Two things come to mind: 
(1) How does one tell which are the integers in a quadratic number field, like Q(sqrt(-3)) or Q(sqrt(-5))? 
(2) Is the ring of integers, such as OQ(sqrt(-3)), a unique factorization domain? 

The first question is resolved by checking if the number is congruent to 1 or 3 mod 4. The second is resolved by using ideals, so unique factorization is possible in Dedekind domains with ideals. I think the number of UFDs among the algebraic number fields generated by square roots of negative integers is finite, if one does not use ideals. I will have to check that.
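The mod 4 classification can be tabulated in a few lines. A Python sketch (my own illustration of the rule discussed in this thread, not a derivation; note that Python's % always returns a nonnegative remainder, so -3 % 4 is 1 and -5 % 4 is 3):

```python
# Tabulate the mod 4 rule for some small squarefree d.
for d in [-1, -2, -3, -5, -7, -11, 5]:
    if d % 4 == 1:
        note = "ring of integers includes (a + b*sqrt(d))/2 type elements"
    else:
        note = "ring of integers is just a + b*sqrt(d)"
    print(d, "is", d % 4, "mod 4:", note)
```

So -3, -7, -11, and 5 get the extra half-integers, while -1, -2, and -5 do not, matching the distinction between Z[√-3] and Z[√-5] made above.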

----------


## desiresjab

Yes, only 3 is a 4n+3 type.

I am confused by this notation: OQ(sqrt(-3)).

Can you say it in English, please?

* * * * *

What I meant to say is that what is true of one 4n+3 number is true of another. Rather, I meant to ask if that is strictly true.

----------


## YesNo

The OQ(sqrt(-3)) is the notation for the ring of integers of the field of algebraic numbers Q(sqrt(-3)). I think the O notation comes from Dedekind. Sometimes that ring of integers can be represented by something like Z[sqrt(-3)], which is the ring of a + b*sqrt(-3) where a and b are in Z, that is, what we normally think of as integers, or "rational integers". Sometimes it is not, as in this particular case, which is why I think Z[sqrt(-3)] was used as an example. Because -3 is congruent to 1 mod 4, we also have integers that look like this: (a + b*sqrt(-3))/2. So the notation using O gives the ring of integers of a field, which could be different from just extending Z by the same algebraic number. In the case of Q, the rationals, OQ is just Z. So for the field Q we don't need new notation because there is no difference from what we would expect the ring of integers to be.

One other notation that can be confusing: if one is extending a field, the notion of "vector space" is used, which is a special kind of "module", and one uses parentheses, such as Q(sqrt(-3)). What makes it special is that Q has all of its inverses. If one is extending a general ring (not necessarily a field), one uses the more general concept of a "module" and the notation changes to brackets, such as Z[sqrt(-3)]. Here's some discussion of that difference: https://www.quora.com/What-is-the-di...le-over-a-ring

----------


## desiresjab

Mod 3 there is no object which when squared equals -1 (mod 3), for -1 is equivalent to 2 (mod 3), and the only squares mod 3 are 0 and 1.

Mod 5 the case is different. We do not even have to adjoin √-1 to our ring, because a square root of -1 is already there: 2² = 4, which is equivalent to -1 (mod 5).

This simple truth hung me up for quite a while. 

When something snags me I find it almost impossible to move forward until I have resolved the problem. Finally I have resolved this one, partly at least. I think you are well beyond me now; I hope not out of yelling distance. A new plateau of understanding may now pour over me quickly.

The article said its quotient field contained all numbers within itself. There is nothing I can square (mod 5) to get 3, however, so it still appears to matter what one is trying to adjoin.
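The squares mod 3 and mod 5 can be listed directly, which confirms both observations above. A Python sketch (my own illustration):

```python
# Which residues have square roots mod 3 and mod 5?
print(sorted({x * x % 3 for x in range(3)}))   # [0, 1]: -1 = 2 is not a square mod 3
print(sorted({x * x % 5 for x in range(5)}))   # [0, 1, 4]: -1 = 4 is a square mod 5,
                                               # but 3 is not
```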

----------


## YesNo

I think it is only mod 4 that is of interest here, not mod 3 or mod 5. 

For quadratic algebraic number fields, that is, fields where the numbers are roots of quadratic equations like ax² + bx + c = 0, the way to tell which algebraic numbers are integers is given by this rule: if we extend Q by the square root of a negative integer -t, then if -t is congruent to 3 mod 4, the algebraic integers are what we would expect them to be, numbers like a + b√-t. If -t is congruent to 1 mod 4, then we have to also include a/2 + (b/2)√-t as integers. This comes from using the quadratic formula to find x = (-b ± √(b² − 4ac))/2a. The 2 in the denominator does not cancel out in this case. The a = 1 because that is required for an algebraic number to be an algebraic integer. It has to be monic.
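Here is a numeric sketch of that rule for -t = -3 (my own check, not from the article): (1 + √-3)/2 is a root of the monic integer polynomial x² − x + 1, so it qualifies as an algebraic integer even with the 2 in the denominator.

```python
import cmath

# Candidate extra integer for the 1 mod 4 case with -t = -3.
x = (1 + cmath.sqrt(-3)) / 2

# It satisfies x^2 - x + 1 = 0, a monic polynomial with integer
# coefficients, so it is an algebraic integer.
print(abs(x * x - x + 1))   # ~0 (floating point)
```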

Don't worry about being out of reach. If I can't explain it, then I don't really understand it well enough and I don't understand this myself all that well. Also, I might have some of this wrong.

----------


## desiresjab

I have been traveling again, and I will have to do even more later this week or early next. While I am away I have no computer to do research on, so I concentrate on organizing as many details as I can remember on these math subjects, or what I can put down on paper.

Ideals and field extensions are parts of the classical theory of pure mathematics. I seriously doubt that ideals have found "many" applications outside of conducting more pure research. Also, I would be only slightly more surprised if ideals had not already found "some" applications outside of pure mathematics.

Almost nothing is known about fields beyond quadratic fields. It was only for some quadratic fields that ideals recovered unique factorization. In the future I expect much more to be known about higher order fields. Perhaps the complexity will eventually be unraveled by quantum computers. At such a time more applications would come into being. Some of them might be more reliable and secure encryption techniques, as happened with the congruence theory of Gauss, a math language which did not find its big application outside of pure mathematics for a full 200 years.

Right now I feel like tackling more math. The harder it gets, the less I feel like diving in, so I had better take advantage of every time I feel like pushing my boundaries.

----------


## YesNo

There is probably a lot that is not known also because the questions haven't been asked, but I don't know what the limits are. I ran into this article on "monogenic" fields which have an example of a cubic field that is not monogenic: https://en.wikipedia.org/wiki/Monogenic_field 

So here is another technical term, "monogenic", and also, "power integral basis". These terms are more pieces in the jigsaw puzzle.

----------


## desiresjab

> I think it is only mod 4 that is of interest here, not mod 3 or mod 5. 
> 
> For quadratic algebraic number fields, that is, fields where the numbers are roots of quadratic equations like ax² + bx + c = 0, the way to tell which algebraic numbers are integers is given by this rule: if we extend Q by the square root of a negative integer -t, then if -t is congruent to 3 mod 4, the algebraic integers are what we would expect them to be, numbers like a + b√-t. If -t is congruent to 1 mod 4, then we have to also include a/2 + (b/2)√-t as integers. This comes from using the quadratic formula to find x = (-b ± √(b² − 4ac))/2a. The 2 in the denominator does not cancel out in this case. The a = 1 because that is required for an algebraic number to be an algebraic integer. It has to be monic.
> 
> Don't worry about being out of reach. If I can't explain it, then I don't really understand it well enough and I don't understand this myself all that well. Also, I might have some of this wrong.


What I need are a few specific examples worked out with all the algebra. If I can see it just once I can figure out how the discriminant figures into this. The problem with any examples I have seen is that they suddenly introduce new variables to make the job harder. Often I cannot see which step to take. For instance, when they say it is obvious so and so is the minimum equation for so and so, I will not see why or how they got the answer.

A few examples geared just for me would make everything clear, I am convinced, but that luxury usually does not exist in math.

Of course I understand that if the discriminant is negative then we introduce the complex numbers through a field extension. I cannot reproduce the algebra involved. I would have to see it once.

----------


## YesNo

Here is a video describing the discriminant for a quadratic equation: http://www.virtualnerd.com/algebra-1...ant-definition You may already know this. 

If the square root of the discriminant is not an integer, then the root (which could be either a real or a non-real complex number) is not a rational number. That is, it is not in the field of rational numbers, Q. We could extend this field by creating an object that has all of Q plus one of these non-rational roots. That would also be a field, and its structure would be a two-dimensional vector space. One of the dimensions would have 1 as its basis element and the other would have this non-rational root, r. The basis would look like this: (1, r). The field extension of Q is written like this: Q(r). If a and b are arbitrary rational numbers, that is, elements of Q, then a number in this new field would look like this: a + br.
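The a + br arithmetic can be sketched concretely. A small Python illustration (my own; exact rational coefficients via the standard fractions module):

```python
from fractions import Fraction as F

# Numbers in Q(sqrt(d)) as pairs (a, b) meaning a + b*sqrt(d), a, b rational.
# Multiplication stays inside the field because sqrt(d)^2 = d.
def mul(x, y, d):
    a, b = x
    c, e = y
    return (a * c + d * b * e, a * e + b * c)

# Example in Q(sqrt(-3)): (1/2 + (1/2)sqrt(-3)) * (1/2 - (1/2)sqrt(-3)).
p = (F(1, 2), F(1, 2))
q = (F(1, 2), F(-1, 2))
print(mul(p, q, -3))   # equals 1 + 0*sqrt(-3), a plain rational number
```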

Now suppose we don't extend Q but instead extend Z, the ring of integers of Q. Z is not a field, but the extension would be similar and is called a module. This is not always a Dedekind domain with ideals forming a unique factorization domain. However, we still could extend Z and see what we get. If we are extending a ring that is not a field, the notation is modified to Z[r], but the idea of the extension is the same. If a and b are integers, that is, elements of Z, then a + br would be in Z[r].

Since the ring of integers OQ(r) may be larger than Z[r], we need some way to tell when they are not the same. The discriminant does this. If the discriminant is congruent to 1 mod 4, then we have to include the root that has a 2 in the denominator, which is what the earlier link referenced when it discussed Z[sqrt(-3)].

----------


## desiresjab

Very good. It seems like I have most of the theory under control. Cannot look at the link yet, for I must travel again. Yes, I am no stranger to the quadratic formula. I just have to see exactly how it fits into all this business.

There is always some unpleasant algebra if one wants to view these things in detail. That will be my last step. We are almost done with ideals and the whole 19th century business of settling the theory of equations. Be back in four or five days this time.

----------


## YesNo

You probably know what is in the link. It just talks about the discriminant for a quadratic polynomial. However, there are a lot of questions. It is worthwhile solving questions at the end of chapters in a textbook. I have been reading Saul Stahl, "Introductory Modern Algebra: A Historical Approach". It is an undergraduate-level survey of algebra with many questions. If we get a common text with problems, that might be a way to go deeper into this subject. What books are you reading?

Here's a question I found interesting about quadratic equations, associated with the idea of "algebraic expressions", that is, the ability to write the roots of an equation as an algebraic expression using the coefficients of the equation: given two numbers, r and s, and the quadratic equation x² − (r+s)x + rs = 0, show that r and s are the two roots of that equation.
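That exercise is easy to check numerically before proving it. A Python sketch (my own):

```python
# r and s should both be roots of x^2 - (r+s)x + rs.
def poly(x, r, s):
    return x * x - (r + s) * x + r * s

# The cancellation is exact: poly(r) = r^2 - r^2 - rs + rs = 0.
for r, s in [(3, 7), (-2, 5), (1.5, -0.25)]:
    assert poly(r, r, s) == 0 and poly(s, r, s) == 0
print("both roots check out")
```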

----------


## desiresjab

We know we are on the lookout for UFDs (unique factorization domains). It is nice to know, then, that all PIDs (principal ideal domains) are UFDs and all ideals in *Z* are principal, which is a nice consolidation.

Irreducible ideals and prime ideals are not always the same in ideal language, but irreducible maximal ideals are always prime ideals.

Without becoming a grinding algorist, one can then follow the language of ideals as written in math the way a non-native pidgin speaker pieces together a newspaper bulletin in a foreign language.

----------


## desiresjab

The last paragraph of your second to the last post sheds a lot of light on the significance of the discriminant in the theory: it is the discriminant itself whose value (mod 4) we are looking at to make our determination of what can be regarded as an *algebraic integer*.

Are some numbers with an irreducible 2 in the denominator then *algebraic integers*, or do they become mere *algebraic numbers* because of their denominator, causing a domain switch as a result?

----------


## YesNo

That's how I follow it also, in bits and pieces. I get stuck, I stop and look for clues elsewhere. 

I have heard that the reason to look at algebraic number fields and their rings of integers is to help us understand better the rational integers, Z. I wonder what insights have been gained? Perhaps insights into Fermat's Last Theorem? Just one of the many questions I have at the moment.

----------


## YesNo

> The last paragraph of your second to the last post spills a lot of light on the significance of the discriminant in the theory: It is the discriminant itself whose value (mod 4) we are looking at to make our determination of what can be regarded as an *algebraic integer*.
> 
> Are some numbers with an irreducible 2 in the denominator then *algebraic integers*, or do they become mere *algebraic numbers* because of their denominator, causing a domain switch as a result?


I think they would be algebraic integers, not just algebraic numbers. They don't look like integers because they can't be written as r + st where r and s are in Z, but they are integers because they are roots of a monic polynomial, that is, one where the coefficient of the x² term is 1.

----------


## desiresjab

But do we even need to look at the discriminant for that? Can't we just determine the value (mod 4) of the number originally adjoined to our field? Doesn't this accomplish the same thing while being much easier to perform?

What is different about the quotient fields (rings?) of √-5 and √-3 mentioned early in one of the articles we have been referencing back and forth?

I believe numbers of the form a/2 + br/2 can be algebraic integers though they do not appear to be. It may come down to what they can do. Can they help recover unique factorization? Do they qualify as an algebraic integer, or does their denominator represent a weakening which demotes these numbers to mere algebraic numbers but which are still useful for the purposes of ideals, i.e., which still qualify to help?

----------


## YesNo

I woke up this morning thinking about why integers are defined as roots of monic polynomials. I probably don't have the full picture. 

I can see how the integers Z can be defined using first degree monic polynomials. (A monic polynomial here is a polynomial over the rationals Q, all coefficients rational numbers, that when written as a polynomial over Z by clearing any denominators has 1 as its highest nonzero coefficient.) A first degree polynomial looks like this: x + 3/2 = 0. That has 1 as the coefficient of the highest term, x, but it is not monic in this sense because 3/2 is not an integer. It could be written as 2x + 3 = 0, which shows the coefficient of the highest term, 2, is not 1. So the root of that polynomial, 3/2, is not an integer, which is what we expect. If we look at 4x + 12 = 0 and divide by 4, we get x + 3 = 0. That is a monic polynomial since all coefficients are integers and the highest term has 1 as a coefficient. The root is 3 which we know is an integer.

If we try to do the same thing for second degree polynomials we find that the roots of some of these monic polynomials look like a/2 + br/2. That is because the quadratic formula has a 2 in the denominator. Sometimes this 2 cancels out. Whether it does or not depends on the discriminant. If the discriminant is -3 it doesn't cancel out so if we consider the ring of integers in Q(√-3) some of them will look like a/2 + b√-3/2 where a and b are integers. So the ring of integers of Q(√-3), written OQ(√-3), has more elements than the ring formed by extending Z with √-3, that is, Z[√-3]. 

What are the differences? It turns out the ring of integers of Q(√-3) is a unique factorization domain, but Z[√-3] isn't even a Dedekind domain. 

If we look at Q(√-5), the discriminant, -5, is congruent to 3 mod 4 and so there are no integers in the ring of integers of Q(√-5) that look like a/2 + b√-5/2. The 2 in the denominator of the quadratic formula for roots of monic polynomials with -5 as the discriminant cancels out. They all look like a + b√-5. So the ring of integers of Q(√-5) is the same as Z[√-5]. The problem is that this ring of integers does not have unique factorization. We will need to use ideals (special subsets of elements) to recover unique factorization rather than individual elements from that ring of integers.

For the first question, looking at the number originally adjoined to the field Q is almost the same thing as looking at the discriminant. For example when we adjoin √-5 to Q to get the algebraic number field Q(√-5), -5 is the discriminant. It just has a radical sign over it. The discriminant is the part under the radical sign in the quadratic formula. 

The discriminant does not tell us if the ring of integers will be a unique factorization domain or not. It only tells us if there are integers of the form a/2 + br/2 in the ring of integers of Q extended by the square root of the discriminant. I don't know how they determined which of these algebraic number fields have unique factorization. That would be another piece of the puzzle for me to find out.

----------


## breslevmeir

i like

----------


## desiresjab

I believe the root of 4x+12 would be -3 not 3.

That polynomials must be reduced to a minimum polynomial, which has to be monic to be of use to us in recovering unique factorization, is understood. However, it seems to me that finding the minimum polynomial for even simple equations is not particularly simple and can involve a lot of algebraic labor.

I still do not see enough to relate this subject to quadratic reciprocity, which I find disturbing. I keep asking myself if the extra factor of 2 in 4n numbers which we observed in the Eisenstein diagram has anything to do with the different behavior of 4n+1 and 4n+3 numbers we observe here. Where does the expression of that extra 2 take place in this arena, if anywhere? The only place I can see so far where there is an extra 2 sticking out like a sore thumb is in the denominator of those numbers of the form a/2 + br/2, but I do not see clearly if and how they relate.

And I still cannot decide if OQ(√-3) represents a quotient ring, i.e., the kind often represented by R/I. Those rings usually involve division by an ideal which contains a complex number, from what I have seen.

For instance, the possible remainders when dividing by 3i (which involves the ideal (3)) are 0, i, 1, 1+i, 1+2i, 2, 2i, 2+i, 2+2i, which is an example given in the chapter 11 link.
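Those nine remainders can be enumerated mechanically. A Python sketch (my own; since 3 is a rational integer, reduction mod the ideal (3) works componentwise on a + bi):

```python
# Reduce a + b*i modulo the ideal (3) in Z[i], coordinate by coordinate.
def mod3(a, b):
    return (a % 3, b % 3)

# Every Gaussian integer falls into one of nine residue classes.
classes = sorted({mod3(a, b) for a in range(-10, 10) for b in range(-10, 10)})
print(len(classes))   # 9
print(classes)        # (0,0)=0, (0,1)=i, (0,2)=2i, (1,0)=1, (1,1)=1+i, ...
```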

Having a way to state long chains of mathematical symbols in spoken language is important to me.

For something like R[√-3], I simply say _R adjoined to the square root of -3_. 

I say it the same way for Q(√-3), and I am wondering if there is a more appropriate way to speak it once we reach the Rationals.

----------


## YesNo

Yes, you are right. It is -3 and not 3. I got it wrong.

For first and second degree polynomials, finding the roots is easy. There exist general algebraic formulas for third and fourth degree polynomials, so the general case is relatively easy there as well, but I think it stops with fifth degree polynomials. No general formula exists. So, I agree it gets harder to find roots of higher degree polynomials.

I don't see where it relates to quadratic reciprocity at the moment either. Given a monic polynomial of second degree, one has something like x² + bx + c where b and c are integers. Then use the quadratic formula to find the roots and see why knowing whether the discriminant is congruent to 1 or 3 mod 4 determines if we will be able to cancel out the 2 in the denominator of the quadratic formula. I think the details were given in one of the links for math stackexchange.

The ring of integers OQ(√-3) would not be a quotient ring. It would be an infinite ring like the integers, Z. To get a quotient ring one would have to take an ideal and mod out by it: http://mathworld.wolfram.com/QuotientRing.html I suspect a quotient ring would be a finite ring (at least for Z). The ring of integers is an infinite ring containing all the integers in the algebraic number field Q(sqrt(-3)).

I use the following words which might not be correct terminology: R[√-3] is a module over the ring R extended with the square root of -3. 

Q(√-3) would be a vector space (or module) over the field (ring) of rational numbers, Q, extended with the square root of -3. It is an algebraic number field. https://en.wikipedia.org/wiki/Algebraic_number_field

----------


## YesNo

Here is the stackexchange article: http://math.stackexchange.com/questi...ed-in-that-way See Adam Hughes response to the question. This is where the mod 4 criteria is helpful. There is nothing about quadratic residues in the answer that I see.

However, reading this again I now picked up on the idea of "integral closure" which might be another key to understanding this better. Here is a definition of an "integral element": https://en.wikipedia.org/wiki/Integral_element As I see it at the moment, using this term, the reason why Z[√-3] is not the way to get to the ring of integers of the algebraic number field Q(√-3) is because Z[√-3] is not integrally closed in Q(√-3). It needs more integral elements from Q(√-3).

I don't understand this either, but that it seems to make sense makes it worth trying to understand better.

----------


## desiresjab

> Here is the stackexchange article: http://math.stackexchange.com/questi...ed-in-that-way See Adam Hughes response to the question. This is where the mod 4 criteria is helpful. There is nothing about quadratic residues in the answer that I see.
> 
> However, reading this again I now picked up on the idea of "integral closure" which might be another key to understanding this better. Here is a definition of an "integral element": https://en.wikipedia.org/wiki/Integral_element As I see it at the moment, using this term, the reason why Z[√-3] is not the way to get to the ring of integers of the algebraic number field Q(√-3) is because Z[√-3] is not integrally closed in Q(√-3). It needs more integral elements from Q(√-3).
> 
> I don't understand this either, but that it seems to make sense makes it worth trying to understand better.


The article says that below is the minimum polynomial for a+b√D. I am not sure how they got that. I realize it is a simple concept and manipulation, but I am unable to make this small leap. It would help a lot if you performed the manipulations that got it to the form below. I am not sure how to get _a_ involved. 

pα(x) = x² − 2ax + (a² − Db²)

----------


## desiresjab

I say the following as, "Z mod six Z," taking my example from the abstract algebra course I watched last year.

Z/6Z

----------


## YesNo

> The article says that below is the minimum polynomial for a+b√D. I am not sure how they got that. I realize it is a simple concept and manipulation, but I am unable to make this small leap. It would help a lot if you performed the manipulations that got it to the form below. I am not sure how to get _a_ involved. 
> 
> pα(x) = x² − 2ax + (a² − Db²)


If a+b√D is a root, then so is a-b√D. Multiplying the two linear polynomials together we get (x - (a+b√D))(x - (a-b√D)) = x² - (a+b√D)x - (a-b√D)x + a² - Db² = x² − 2ax + (a² − Db²).

If one lets r and s be two roots of a quadratic equation, multiplying linear polynomials together leads to a general solution: (x - r)(x - s) = x² - (r+s)x + rs. The coefficient of the x term is the negative of the sum of the roots and the unit term is the product of the roots.
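The expansion can be verified numerically for sample values. A Python sketch (my own; the particular a, b, D below are arbitrary illustrative choices):

```python
import cmath

# For alpha = a + b*sqrt(D), check alpha satisfies x^2 - 2ax + (a^2 - D*b^2) = 0.
a, b, D = 2, 3, -3
alpha = a + b * cmath.sqrt(D)
value = alpha ** 2 - 2 * a * alpha + (a * a - D * b * b)
print(abs(value))   # ~0 (floating point)
```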

----------


## desiresjab

> If a+b√D is a root, then so is a-b√D. Multiplying the two linear polynomials together we get (x - (a+b√D))(x - (a-b√D)) = x² - (a+b√D)x - (a-b√D)x + a² - Db² = x² − 2ax + (a² − Db²).
> 
> If one lets r and s be two roots of a quadratic equation, multiplying linear polynomials together leads to a general solution: (x - r)(x - s) = x² - (r+s)x + rs. The coefficient of the x term is the negative of the sum of the roots and the unit term is the product of the roots.


Alrightee, that is clear enough. I got it. I should have had it before, but sometimes merely hearing someone you trust say something clears it up better than texts. Of course I am supposed to know what you had to write out, having already seen it multiple times, but somehow it did not completely stick. As long as I finally comprehend things, I cannot concern myself with the time it takes, since that slows me down even more.

----------


## desiresjab

> If a+b√D is a root, then so is a-b√D. Multiplying the two linear polynomials together we get (x - (a+b√D))(x - (a-b√D)) = x² - (a+b√D)x - (a-b√D)x + a² - Db² = x² − 2ax + (a² − Db²).
> 
> If one lets r and s be two roots of a quadratic equation, multiplying linear polynomials together leads to a general solution: (x - r)(x - s) = x² - (r+s)x + rs. The coefficient of the x term is the negative of the sum of the roots and the unit term is the product of the roots.


I seem to have carelessly lost a long post that was supposed to go here.

----------


## desiresjab

The minimum polynomial when extending a field by √5 is 

x² = 5, or x² − 5 = 0. Applying the quadratic formula:

(−0 ± √(0² − 4·1·(−5)))/2 = ±√(4·5)/2 = ±2√5/2 = ±√5

For √-5, which is a 4n+3 number, the minimum polynomial is 

x² = −5, or x² + 5 = 0. Applying the quadratic formula:

(−0 ± √−20)/2, which merely reduces to ±√−5

I do not see the real difference in terms of 5 being a 4n+1 number and -5 being a 4n+3 number. In both cases above, the 2 beneath the discriminant factors out. What simpleton mistake have I made this time? I have to be looking at something major wrong.

----------


## YesNo

Your questions make me wonder what is meant by a minimal polynomial. I think it means that given a root, such as √5 or (1+√5)/2, the minimal polynomial of that root is a polynomial with rational coefficients with minimal degree. If we have square roots of non-square integers, these would have a second degree polynomial as their minimal polynomials. Rational numbers would have first degree polynomials as their minimal polynomial. One can always multiply that polynomial by some other linear polynomial, say x-7, to get a larger degree polynomial. The minimal polynomial is associated with specific algebraic numbers, the roots of that polynomial. (The term of the minimal polynomial with the largest degree should have 1 as its coefficient. This guarantees uniqueness of that polynomial. It can be found since the coefficients of the minimal polynomial are over a field such as the rationals, Q.)

What one has with Q(√5) are all of the algebraic numbers that can be written as a + b√5 with a and b being rational numbers. All the rational numbers are in this field because b could be 0. 

What the discriminant being congruent to 1 or 3 mod 4 is supposed to tell us is whether there exist algebraic integers in Q(√5) that have a 2 in the denominator or not. The 2 won't cancel in all cases if 5 is congruent to 1 mod 4. 

What does it mean to be an algebraic integer rather than just another algebraic number? The minimal polynomial has integer coefficients and the coefficient of the largest non-zero term is 1.

As an example, consider (1+√5)/2. 

This is an algebraic number in Q(√5) because (1/2)+(1/2)√5 is of the form a + b√5 where a and b are rational numbers, in this case both rational numbers are 1/2. 

To find its minimal polynomial, I used the idea that if (1+√5)/2 is a root then so is (1-√5)/2. (That might be worth trying to prove, but I can't think of the proof at the moment.) If r and s are roots of a quadratic polynomial, then (x - r)(x - s) = x² - (r+s)x + rs. So, to get the middle term I add the two roots (1+√5)/2 and (1-√5)/2. I get 1 and then subtract it. To get the unit term I multiply those two roots to get -1. So the minimal polynomial is x² - x - 1. Using the quadratic formula, I check that (1+√5)/2 is a root of that polynomial.

Is it an algebraic integer? Yes. The coefficients of its minimal polynomial are all integers and the highest term has coefficient of 1.

So Q(√5) has algebraic integers that have a 2 in the denominator, as the discriminant tells us to expect. That means the ring of integers of Q(√5) cannot be completely represented by Z[√5]. There are algebraic integers in Q(√5) that don't have this 2 in the denominator (such as all the rational integers and √5), but we are only interested in knowing if some of them need that 2 in the denominator.
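The example can be replayed in a few lines of Python; a sketch of mine confirming that (1+√5)/2 and its conjugate multiply out to x² - x - 1, a monic polynomial with integer coefficients:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the root (1+sqrt(5))/2
psi = (1 - math.sqrt(5)) / 2  # its conjugate (1-sqrt(5))/2

# (x - phi)(x - psi) = x^2 - (phi + psi)x + phi*psi
assert math.isclose(phi + psi, 1.0)   # so the middle coefficient is -1
assert math.isclose(phi * psi, -1.0)  # and the unit term is -1

# The minimal polynomial is therefore x^2 - x - 1; check phi is a root:
assert abs(phi ** 2 - phi - 1) < 1e-9
# Monic with integer coefficients, so (1+sqrt(5))/2 is an algebraic
# integer despite the 2 in its denominator.
```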

----------


## desiresjab

Great post. Nice amount for me to think about.

----------


## desiresjab

> Your questions make me wonder what is meant by a minimal polynomial. I think it means that given a root, such as √5 or (1+√5)/2, the minimal polynomial of that root is a polynomial with rational coefficients with minimal degree. If we have square roots of non-square integers, these would have a second degree polynomial as their minimal polynomials. Rational numbers would have first degree polynomials as their minimal polynomial. One can always multiply that polynomial by some other linear polynomial, say x-7, to get a larger degree polynomial. The minimal polynomial is associated with specific algebraic numbers, the roots of that polynomial. (The term of the minimal polynomial with the largest degree should have 1 as its coefficient. This guarantees uniqueness of that polynomial. It can be found since the coefficients of the minimal polynomial are over a field such as the rationals, Q.)
> 
> What one has with Q(√5) are all of the algebraic numbers that can be written as a + b√5 with a and b being rational numbers. All the rational numbers are in this field because b could be 0. 
> 
> What the discriminant being congruent to 1 or 3 mod 4 is supposed to tell us is whether there exist algebraic integers in Q(√5) that have a 2 in the denominator or not. The 2 won't cancel in all cases if 5 is congruent to 1 mod 4. 
> 
> What does it mean to be an algebraic integer rather than just another algebraic number? The minimal polynomial has integer coefficients and the coefficient of the largest non-zero term is 1.
> 
> As an example, consider (1+√5)/2. 
> ...


Are you saying...well, exactly what are you calling an algebraic integer--x² - x - 1, or one of its roots? Do those roots with a 2 in the denominator become classified as algebraic integers whenever they are the roots of monic polynomials?

Even when you do a great job I still have dumb questions, you see.

----------


## YesNo

The algebraic integers are the roots of monic polynomials with integer coefficients, that is with coefficients in Z. Algebraic numbers in general are defined in a similar way. They are the roots of monic polynomials with rational coefficients, that is with coefficients in Q. 

For quadratic number fields sometimes those roots have a 2 in the denominator such as (1-√5)/2 which comes from the quadratic formula with the 2a in the denominator.

----------


## desiresjab

Yes, very good. Your explanations are quite acceptable. I take it, then, that an algebraic number with a denominator of 2 does not become an algebraic integer just because it happens to be the root of a monic polynomial with integer coefficients. It is still only an algebraic number, but an important one, now on an equal footing with the algebraic integers, because it is a root itself.

The denominator of these roots tells us whether the root is an algebraic integer or merely a "_rootish_" algebraic number.

Multiplying the roots together as you did, one can see some action in the denominator. However, working with the quadratic formula in the usual high-school fashion, nothing seems to reveal itself with regard to 4n. At that level I do not detect anything about that of which we are speaking. I already know there will be an unreduced 2 in the denominator at the end, or there will not be, and I know why it will or will not be there.

I feel I am getting pretty close, but there are still small pieces here and there I do not have in place yet.

When you start out with a field extension you can already see what "type" the root will be, whether it will be a 4n+1 or a 4n+3 number, for instance. When you start out with the quadratic formula you are trying to determine what the roots are. Once you find a root you could always assume a field extension was made earlier. I do not see anything discouraging me from looking at it this way.

Unless I am making a field extension, I have to use the quadratic formula to find roots, and therefore will not know the relationship of my roots to 4n in advance, or not until the formula has worked its root-finding magic.

At least that is my current view of the whole situation. Some of it is bound to be deficient, I suppose, downright incorrect, or short-sighted.

----------


## desiresjab

And, by God, I believe I know (at least I hope I am right) that every algebraic integer is the root of some minimal polynomial with integer coefficients. But of course not every such root is, conversely, an algebraic integer (because we know of the existence of _some_ monic polynomials with integer coefficients whose roots are yet of the form a/2 + b√D/2).

I am just musing out loud to see if I am correct or incorrect on a few ideas. Jigsaw puzzles are completed at exponential acceleration as time increases.

----------


## YesNo

An algebraic number I think could be defined as the root of a monic polynomial where the coefficients come from the rational numbers, Q. The reason to use a monic polynomial is to avoid having many of these minimal polynomials. For example, x - 7 = 0 gives the same root, 7, as 2x - 14 = 0 does. Since the rational numbers form a field, they have inverses so we can divide by the coefficient of the term with the highest power of x and make it 1. For example, 2x - 3 = 0 can be written as x - (3/2) = 0 and the root 3/2 is an algebraic number. In this case it is a rational number as well.

If it turns out that this minimal, monic polynomial has all integer coefficients, then one defines that root as being not only an algebraic number, but also an algebraic integer no matter what it looks like. That means (1-√5)/2, even though there is a 2 in the denominator, is also an algebraic integer. It is the root of a minimal, monic polynomial with integer coefficients. So it is an algebraic integer.

If we are looking at quadratic algebraic number fields, that is fields where the rationals Q are extended by the square root of a non-square integer, such as √5, then checking whether 5 is congruent to 1 or 3 mod 4 will tell us if the algebraic integers could have a 2 in the denominator or not.
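That mod-4 test can be written down as a tiny Python helper. This is a sketch for squarefree d only, and the function name is my own invention, not from any library:

```python
def ring_of_integers(d):
    """For a squarefree integer d, describe the ring of integers of Q(sqrt(d)).

    If d is congruent to 1 mod 4, algebraic integers with a 2 in the
    denominator appear, and the ring is Z[(1+sqrt(d))/2]; otherwise
    (d congruent to 2 or 3 mod 4) it is just Z[sqrt(d)].
    """
    if d % 4 == 1:  # Python's % returns a value in {0,1,2,3} even for d < 0
        return "Z[(1+sqrt(%d))/2]" % d
    return "Z[sqrt(%d)]" % d


assert ring_of_integers(5) == "Z[(1+sqrt(5))/2]"  # 5 = 4*1 + 1
assert ring_of_integers(-5) == "Z[sqrt(-5)]"      # -5 = 4*(-2) + 3
```

This matches the thread's two running examples: 5 is a 4n+1 number, so halves appear, while -5 is a 4n+3 number, so they do not.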

----------


## desiresjab

> An algebraic number I think could be defined as the root of a monic polynomial where the coefficients come from the rational numbers, Q. The reason to use a monic polynomial is to avoid having many of these minimal polynomials. For example, x - 7 = 0 gives the same root, 7, as 2x - 14 = 0 does. Since the rational numbers form a field, they have inverses so we can divide by the coefficient of the term with the highest power of x and make it 1. For example, 2x - 3 = 0 can be written as x - (3/2) = 0 and the root 3/2 is an algebraic number. In this case it is a rational number as well.
> 
> If it turns out that this minimal, monic polynomial has all integer coefficients, then one defines that root as being not only an algebraic number, but also an algebraic integer no matter what it looks like. That means (1-√5)/2, even though there is a 2 in the denominator, is also an algebraic integer. It is the root of a minimal, monic polynomial with integer coefficients. So it is an algebraic integer.
> 
> If we are looking at quadratic algebraic number fields, that is fields where the rationals Q are extended by the square root of a non-square integer, such as √5, then checking whether 5 is congruent to 1 or 3 mod 4 will tell us if the algebraic integers could have a 2 in the denominator or not.


Ah, your second paragraph is what I have been driving at and harping on, I don't know why. It was an instinct or the unconscious memory of something I read. It seems now that these numbers with 2 in the denominator can indeed be classified as algebraic integers as long as they are the roots of monic polynomials with integer (no, rational) coefficients.

Hmmm...I still don't know if it is rational coefficients or integer ones.

----------


## desiresjab

Well, lad, the only thing we have not done is the long, 19th-century algebraic manipulations from which these ideas came. I don't know if we need to do that. Between your forced didacticism and my own efforts, understanding seems to have arrived.

* * * * *

Now we must ask: have we moved an inch, cosmologically speaking? What do ideals (and for that matter quadratic reciprocity) have to do with cosmology?

Well, do not forget, the deeper structure which we believe silently rules the universe and ourselves is what we hoped to catch a glimpse of by delving into exotic maths. Whether we have done that is a matter for debate, perhaps. 

Even when numbers look exhausted and capable of no more order, enough genius is always able to find more structure nested in them. Ideals demonstrate this. Ideals did not capture the ultimate order. Ideals could (ideally) apply in only some of the cases where unique factorization is not possible among the integers, polynomials, _et al_. The theory made inroads; it did not settle all matters once and for all; it pointed a way forward.

We will continue to discover deeper and less accessible structures within numbers themselves, which will eventually connect to our own consciousness, I believe. Our consciousness likely hails from some deep structure we have barely glimpsed. Someday a connection will be made between us and the arithmetic structure we keep unraveling, is my one trusted belief.

----------


## YesNo

I don't know if there is any cosmological significance in this, but there may be. I have enjoyed thinking about it and I did read most of Dedekind's book finally after having forgotten I bought it long ago.

Symmetry is supposed to be related to cosmological ideas. I forget how at the moment. That might be a place to continue pursuing the relationship between cosmology and mathematics. Then, of course, there is also a study of tensors and the Lorentz transformation for special relativity. Here is Einstein's "The Meaning of Relativity" which I have read parts of in the past: http://www.gutenberg.org/files/36276...68a2f9e44ff27b

One place where I think mathematics leads cosmologists astray is in a belief in constants. For example, is the speed of light really a constant? Is big G, the gravitational constant, really constant? It is convenient for the mathematics that they are constant, but I don't know how we would be able to tell.

----------


## desiresjab

The orchestration of coordinated activity of microtubules in brain neurons is one of the initial steps to understanding consciousness as a mathematical phenomenon, i.e., one which can be explained and predicted using mathematical tools some of which might not yet exist.

When there is enough "accord," the microtubules act in unison like a flock of swallows banking and twisting at high speed without collisions.

I put accord in quotations. Call it a metaphor for consciousness. Scientists are busy constructing models of consciousness in 248 dimensional matrices compressed to 8 dimensions. They are looking for ways to make the swallows act in unison.

These are the initial baby steps. Where we get to is anyone's guess. My pal YesNo is quite convinced by Searle's argument that strong AI will never come about. Actually, I take YesNo more seriously than I take Searle. To me, Searle has constructed a semantic argument I do not feel compelled to even challenge. It is like one of those old semantic arguments by Kant or Spinoza that seem quaint and innocent enough these days to bring a smile to our lips at yesterday's children.

* * * * *

There is this notion (unshakeable for most) that our own intelligence is real intelligence, and everything else, if it is not made of meat or DNA, is an artificial intelligence.

I beg to differ. Only if we came about entirely without the aid of any form of consciousness is our intelligence the "natural one," the real one. Otherwise, we ourselves are created, and therefore artificial.

If our intelligence (earthly) is therefore artificial, having been aided into existence or manifestation by any form of consciousness whatsoever, then I say the job of creating artificial intelligence is already a done deal, having been accomplished at least once to date.

By extrapolation I might contend that since YesNo and others believe in a consciousness that permeates individual atomic particles, the form of intelligence we represent can hardly have been attained without contact with any individual particles! This makes us and our intelligence artificial since we defined artificial as having been aided in any way by any form of consciousness, even the wee consciousness found in individual particles.

----------


## desiresjab

I did not see that you had come back to answer.

Yes, symmetry is a major idea. To really study it one should probably delve head first into group theory. Boy, I don't know if I am ready for that right now. I will go only where I have to out of intrigue. Dumb people have to limit their enterprises in some way. I do it by interest alone, letting nothing else interfere. There is always something in math I feel dumb for not knowing, and this provides my main drive. But the motivation is to follow structure deeper.

----------


## desiresjab

Would you draw any distinctions between constants from physics like the speed of light and a geometrical constant like pi or an arithmetical one like e? For it seems like a number such as e, whose exponential function is its own derivative, will not change under any circumstances I can predict.

----------


## YesNo

The most that anyone can come up with is a model of reality. This is an objective map. It helps make predictions but just because we have a map does not mean that map IS reality. At most it is only a part of reality that we find interesting enough to want to make predictions about. In particular it does not contain our subjective perspective on reality.

The reason I reject AI (both strong and weak version) is because the AI computer is a deterministic-random machine. It is pure objectivity, like a table or chair. It cannot make a choice that is not part of an optimization process that can explain the decision. We can make such choices. A photon can make such "choices" as well. There is no programming or optimization underlying quantum indeterminism because there are no hidden variables to explain the indeterminism.

----------


## YesNo

> Would you draw any distinctions between constants from physics like the speed of light and a geometrical constant like pi or an arithmetical one like e? For it seems like a number such as e, whose exponential function is its own derivative, will not change under any circumstances I can predict.


In mathematics, the constants such as pi or e do not change. They are not empirically derived. However, physical "constants" are empirically measured. They are useful up to a certain number of decimal places. We assume they are good for any number of decimal places and that they do not change with time. But how are we going to know that empirically?

----------


## desiresjab

We have grasped the essence of ideals. I feel my understanding of ideals is on a par with my grasp of QR. It is time to move on unless someone has a cogent remark about ideals at this point.

I do not have an area of math in mind to visit next that I feel relates to cosmology or the deeper understanding of structure in the universe. Exploring fancy counting methods from probability theory would be interesting and fun but seems far away from cosmological pursuits to me. We only want math involved if it offers the possibility of deep glances at structure, not math merely for the sake of having it.

YesNo may have an area in mind that he feels is relevant.

* * * * *

Belief #1

For the moment, I would like to turn back to what I feel is the surest idea in my philosophy.

For the universe--for all things--to come out of nothingness is impossible. For nothingness means the absence not only of the tangible but of the intangible as well. In true nothingness, there would be no existence of any kind. Even the potential for something to exist later is not permissible in nothingness, for that potential would be something which existed, though not tangible.

The forced result is that *something* had to always exist. There never was a time or a state in which pure nothingness prevailed. Pure nothingness cannot _be_, it is only a concept of the imagination.

It is logically undeniable that there is something eternal in the universe. Lacking a better name, that thing is existence itself, at the minimum, and perhaps God or consciousness, if we but knew the truth.

* * * * *

Belief #2

In any possible universe with physics, 2 is the successor of 1. Universes which run backwards, universes which are p-adic, and all other apparent exceptions are easily remedied by a simple re-labeling, e.g., the last event in a universe which runs backwards would become the first event under the re-labeling, easily allaying the problem and casting it in its proper light as no more than a labeling phenomenon. Under this belief it is forced that all universes will submit to our mathematical labeling. No universe has a choice to refuse, for we are guaranteed to find mathematical labels that apply.

----------


## YesNo

I agree with your two beliefs. 

Symmetry or invariance may be a useful way to see the structure of physical models of the universe. https://en.wikipedia.org/wiki/Symmetry_(physics) I haven't looked at this closely. Relativity is an "invariance" when measuring differences between two events in space-time from any frame of reference. What invariances are there in physical models? Philosophically, what does this tell us about reality?

I have been re-reading Moffat's "Reinventing Gravity". I would like to understand the theory of gravity well enough to make more sense out of Moffat's modification of it. It appears that Einstein's theory of gravity breaks down when discussing galaxies and larger clusters of galaxies. It no longer makes accurate predictions unless one assumes there is dark matter and dark energy present. 

Then there is also quantum physics. It is easy to confuse the model with reality here, but one has to know the model to philosophically assess the confusion. These all tie together. I don't think it is possible to compress an atom into a black hole which makes me wonder if black holes are possible. If they are not then Einstein's theory of gravity needs to be modified.

For me, the whole question is philosophical, but I need to understand the mathematics and physical theory to ground that philosophy.

----------


## desiresjab

It usually seems to me that the deepest propositions from physics are doomed to failure and usurpation. For instance, _What is the nature of matter?_ is to me likely a doomed question: we will never settle on any one answer to it for long. Compared to this, the absolute truth of Quadratic Reciprocity stands out like a granite monument of absolute and unchanging consistency.

Each temporary answer we accept along the way will take us far and enable many new miracles of technology. But in the end each will show its limitations and contradictions which prepare the way for a new theory to supplant it.

Each new advancement will have a mathematical framework, sometimes consisting of newly invented or discovered mathematics. When the physical theory it once supported has been supplanted, the skeletal remains of these systems will consist of a funeral scaffolding of mathematics which remains true of itself without the insufficient physical theory it was once thought to support.

In other words, it is not mathematics which we cannot know with certainty, but the nature of physical reality which eludes and will continue to elude us. Each best theory of physical reality will become insufficient and contradictory. On the other hand, addition, subtraction, derivatives, integrals and matrices are just as useful, true and efficient as they ever were at _containing_ certain aspects of physical reality.

----------


## YesNo

That's how I see it also. Mathematics is certain. Technology, when it works, is useful. Physical theory changes.

I found Frank Wilczek's "A Beautiful Question" in the library. It is about symmetry and physics. I expect it to be a survey of ideas he finds beautiful in science.

----------


## desiresjab

I have avoided studies of symmetry because I feel one must know a lot of Group Theory, since that theory is known for ideas on symmetry. But, yes, it is highly provocative, and probably bears a good relationship to the deeper structures we are seeking.

----------


## YesNo

I stopped reading Wilczek's book after reading the introduction. All of a sudden I felt like it might not be what I needed at the moment.

I am sort of avoiding symmetry for the same reason, but if I think of symmetry as a way to achieve "invariance" in physics theory I start looking at it differently. I do have Carmichael's Introduction to Groups of Finite Order. That will take some time to read and most of it may not be relevant to physics.

----------


## desiresjab

I am not searching for religion. But one person I am acquainted with through music, and who can suddenly turn anti-religious in the typical ranting way, has already tentatively grouped me with the "religious kooks," I can tell, because I related my recent reflections to him. The fact that I even related them to him indicates the paucity of philosophical minds in this burg.

* * * * *

Let us recap. Over in the thread The Fall under the category of Religion, I offered a proof of the existence of a consciousness if there was a beginning of everything. It is simple, and I am satisfied with it, for the moment.

Going to Scenario #1 where absolutely nothing existed, we see that something had to exist anyway, namely the potential for all other things to come about, or else we could not be here now.

To hazard what this potential actually consisted of, we were only able to come up with two possibilities:

1 Some sort of _Meta thing_ that could exist under these conditions.

2 A Consciousness, i.e., a Will.

No one offered any other alternatives. We were able to dispel the notion of Meta things quite easily, as it turned out. Not a thing else exists, remember, not even ideas and other abstractions. The Meta things are like precursors of things to come. Except there is no "to come." There is no time. The Meta thing cannot become real. It cannot move from its original condition. Its so-called potential to create real things is only an illusion after all.

The remaining possibility is Consciousness. It is a very good candidate because we know so little about it. It is the _only_ other candidate, which is quite compelling indeed. We do not know where consciousness comes from, what truly restrains and produces it, nor indeed even what it is.

The fact that Consciousness _can_ have an Imagination means to me that even if Time did not exist, it could be imagined by the Primal Consciousness. The same with light and all the other phenomena of our universe. That is something Meta things cannot do, unless they, too, had imaginations, which would end that part of the discussion, and in fact does end it to my satisfaction. Also, Meta things contain no way of kickstarting the creation of _everything_.

By this point in our discussion it is okay to sometimes use the word God, since we mean Primal Consciousness by it, and are not ready to assign qualities to God unless we find there is one.

The basic argument is a very old one called the First Cause argument. What satisfies me is that there is no viable alternative to consciousness. In order for Meta things to be able to perform the tasks that the single concept of consciousness could, they would need to consist of myriad other things, such as built-in programs for kickstarting Time, a concept, remember, which does not exist. Without an imagination, Meta things cannot conceive of Time, either. Occam's razor seems to demand that consciousness be our philosophical supposition.

The argument has some contingencies regarding the nature of consciousness and its ability to operate under the condition of _nothingness_. It is the only candidate that might be able to do this. No one else has put forward another. 

That is the conclusion of arguments for Scenario #1.

* * * * *

Scenario #2 is the only other possible scenario. It is the scenario under which we assume _things_ have existed forever. We have seen that it is an undeniable fact that something always had to exist. In Scenario #1 it turned out to be potential in the form of consciousness with an imagination, as near as we could figure.

Under Scenario #2 the _world_, as in something or the other, as in everything, always existed, it was not created. This scenario is philosophically a tougher nut to crack. Immediately, it is difficult to ascertain a logical necessity for there being a Primal Consciousness. Things simply always were. Hmmmm...

Such a proposition must surely mean life always existed, too. We are talking about times beyond this universe, a trillion Big Bangs ago. This amounts to something asymptotically close to a certainty that we are not the first conscious life forms. Infinite time before us has produced every individual type of thing before us, because there was infinite time to accomplish it in. There actually _is_ nothing new under the sun, in this scenario. There cannot be.
Dwell on it, you will see.

Lack of dwelling on it is one problem in talking about these things with folk who are just going about their daily business. Unless one has dwelled and meditated on the exact topics for hours on end, they are received as just words, which are then processed in the normal way with all prejudices present, right after the grocery list.

Meditate on the concepts to know the truth.

I will now prove the necessity of God under the remaining Scenario, #2.

You missed it again. Me, too. But, ah, now I have seen it. People underestimate infinity. They underestimate infinite Time. There has been time under this scenario for everything and anything to come about. So a God came about by necessity. This God would not have created the universe (as in everything), but is a God nonetheless.

I am under no injunction to make my God the creator of the universe. I am not a Christian, Moslem or Hindu.

Infinite time produces everything, since things always existed. We know for a fact that if Scenario #2 is true, that it produced consciousness one way or another, which is a step toward God already.

It is improper to view ourselves as the first to evolve toward the Godly, simply because we cannot be, in a Scenario where Time and Things have always existed. It would be logically erroneous for us to view ourselves as the first who have started toward Godhood. Godhood already has to have been attained, one way or another, one scenario or another. We have to live with that. It is logically sound.

That concludes the arguments for the case of Scenario #2. I have proven that under any case imaginable, logically God exists.

* * * * *

No single religion or denomination thereof will approve, I am sure.

Remember, I did _not say_ God was _not_ immortal and had not been around forever. I said that if he wasn't around forever, he was around by now anyway. As far back in time as you want to go, infinite time has already existed before that point.

In a universe with infinite time, there are only finitely many arrangements of all the particles. Even if it takes an octillion years, the exact arrangement of particles which constitutes you will come around again, and you will be born into a world which is exactly the same, nearly the same, or radically different from the one you know now. All of them will come, and have come, given infinite time.

What has existed? Nothing but everything. If not in this universe, which may be finite, then it has existed in another one. Everything we have imagined in our fiction has already been reality--giant robots fighting mankind's battles, dragons, demon possession, time travel--all have inevitably existed under Scenario #2, plus many more things we have not yet imagined. The cinch is that anything we do imagine has already happened.

* * * * *

God exists, folks. Like it. Unfortunately, the devil and all his demons necessarily do, too.

A Scenario where God is not all-powerful in the sense of having created everything, comes together logically very nicely once one is over the initial hump of realizing there necessarily is a God in this scenario, too. As powerful as you want to name, but not the creator of the universe. Not infinitely powerful, but as powerful as you can name. But possibly as old as the universe anyway, and at least its ultimate inhabitant.

Like it. There is a God who may or may not have created the universe, but who nonetheless may be as "old" as existence itself.

There it is.

----------


## YesNo

I am thinking along the same lines. A place where we might disagree is here (for some reason I can't quote a post, so I will just copy it):

"In a universe with infinite time, there are only finite arrangements of all the particles. Even if it takes an octillion years, the exact arrangement of particles which constitutes you will come around again, and you will be born into a world which is the exactly same, nearly the same, or radically different from the one you know now. All of them will come, and have come, given infinite time."

If unconscious things do exist and they can be reduced to particles and we are the result of them, then I think this would be true. But (1) do unconscious things exist, (2) are they reducible to particles, and (3) is our consciousness reducible to them? If there are no unconscious particles, then what we are may be infinite, with infinite variability, and so in a finite amount of time everything could be different.

----------


## desiresjab

The idea does imply the old notion of randomness and that particles have no other _reason_ to get together. In a mechanistic universe with infinite time available, particle arrangements are finite and must repeat. I am sure you willl have no objection to that much.

Particle arrangements may draw consciousness to them rather than create consciousness.

There is also the possibility that consciousness (God) imparted some of itself to us--the conscious part. Where was it stored, so that generation after generation now imparts it to their own kind, who are conscious at birth? I do not believe amoebas are conscious, because they do not have a reflective sense of self. You are convinced electrons are conscious, so maybe you have no trouble accepting an amoeba into the fold.

Or perhaps (something like the mechanistic view again) certain arrangements naturally provide consciousness into the arrangement. _The arrangement did it, mama_.

Now a man (this man, at least) has to have a pretty good reason for choosing one of these over the others. So far I do not have that good reason. I refuse to believe and defend something simply because I fervently want it to be true. I want there to be an afterlife. But so far I have not proven or demonstrated convincingly that there is one; I have not shown the logical necessity or the likely existence of one. To say that infinite time would create anything, including an afterlife, is not good enough in this case, as it was in the case of physical particles, since we can posit nothing yet as to the nature of this afterlife, if it did exist--what it is made of, and the like.

Of course, not knowing what consciousness is made of leaves us in the same conundrum. I have a strong inkling the afterlife is made only of consciousness, however. This final component of the scenario may be made of only itself, indivisible.

* * * * *

The notion that God created us might only mean he imbued an animal with consciousness, not that he directed our evolution from a single cell. These things must all be reasoned out.

The next place to dwell is on the likely nature of God, since we have shown the existence of at least a primal consciousness. In Scenario #1 we could not prove that God was not already dead, however, just that he had existed in the beginning. In Scenario #2 it did not matter if God had died, for he would come around again, eventually creating a version of God that was not temporal. 

* * * * *

I lean heavily toward Scenario #2. Christians would like Scenario #1 better. But I guess I am averse to the idea of beginnings. I already know existence was here. If existence was here, I think everything about existence was here.

God may be in an existence he did not create. The universe he created may only be this artificial one we experience. That is where God would have absolute and universal power, but perhaps not in the larger Scenario he is part of.

To my way of thinking, we are already artificial, along with our whole universe, if we were indeed "created." All creations are artifice, artificial, not original, purposefully made by another consciousness. We would have to admit we are artificial, if we believe we were created. It should not shock Christians that our reality is less real than the reality of God. In the Bible I believe what God is promising to obedient servants is a further taste of that higher reality.

----------


## YesNo

I agree with what you say about a mechanistic universe. The arrangement of finite particles would repeatedly return to the same arrangement. However, I don't see our universe as mechanistic. So the argument is hypothetical for me.

I don't see consciousness as dependent on self-reflection. Nor does the ability to make a choice depend on self-reflection. Our choices appear without prior causes, or they would not be "free". Our self-reflective reason rationalizes these prior free choices. So I could have consciousness, characterized as an ability to make a choice, wherever I could show that neither determinism nor uniformly distributed random processes can explain the behavior of something. That would include quantum reality.

Traditionally God does more than "create" the universe the way one might create a computer and then let it run down. That's an atheistic simplification of God, set up to argue against His existence. I agree with the atheists: such deities do not exist. Besides, they are mechanistic, and their existence would assume the universe were mechanistic, which it is not. Such deities have nothing to do with what people who are theistic mean by "God". God also sustains the universe, that is, keeps it in being constantly. So there is no way for Him to be already dead.

In Scenario 1, things had a beginning. In Scenario 2, things are eternal. One needs to know what "things" are. Defining things is as difficult as defining consciousness. 

Consider Scenario 1: The universe had a beginning. Some consciousness preceded this beginning, but nothing comes from nothing, so there is no-thing now under that scenario. There is only consciousness that gets manifested to us as objects. Our present universe looks like this given the big bang.

Consider Scenario 2: The universe did not have a beginning, but because we are here, the universe does contain consciousness. We don't know that it contains anything "unconscious". What we see is what reality appears like to us. Supposing there is something unconscious in the universe, then we have to make sense out of how both conscious and unconscious reality can exist together. I don't think it can, so even in Scenario 2, all we have is consciousness.

Are we artificial? It is true that an AI robot is artificial, just like the chair I am sitting on. Because of that, the robot and the chair, as opposed to the reality they are made out of, are not conscious. But we are conscious. How can something conscious be artificial?

----------


## desiresjab

You are really stretching, while I am staying logical.

The word "choice" is a bad choice for whether something is manifesting consciousness, because your standards seem so low for what constitutes a choice.

God is only for sure not dead under Scenario #1, where he created "everything" out of his imagination and must still be around to keep the light show going.

* * * * *

For a consciousness unfamiliar with ideas, what starts as an urge could grow into an idea.

The necessary Gods of Scenario #2 did not create the universe. The necessary ones (as in logically demonstrated) are products themselves of a universe infinitely old, which has had to produce everything already that it was going to. And given enough time, one of the things it was going to produce was beings that make us look like amoebas at a Mensa meeting, spread out among the stars, or in hidden universes.

If God is next of kin to consciousness, God is not dead, but it had to be mentioned and considered.

Your personal beliefs are pushing way ahead of the discussion and proof. I cannot grant consciousness to particles or state flatly the universe is not mechanistic.

* * * * *

I am willing to logically speculate on certain things, and call them speculations.

In #2 it seems to me we could expect every kind of God, both good and malicious. A God powerful enough to be the devil seems likely. (There I go again, trying out of the corner of my eye to reconcile my speculations with Christian tradition simply because I grew up in it, when I do not actually believe the religious part of the tradition any more than I believe Moslem teachings.)

We could expect a malicious God powerful enough to be called the Devil. Judaic/Christian tradition tells me the Devil is so powerful that God can only protect me under certain conditions. I have to behave. A father does not sentence his own "children" to an eternity of the worst kind of punishment for misbehavior, unless he has no choice. So, the God known as the Devil would be quite strong, if Scenario #2 is the real scenario.

Ignoring Biblical sources, we may postulate the devil is either an invader in another God's domain, or a rightful occupant fighting off an invader.

----------


## YesNo

What we are both doing is rationalizing our prior beliefs. We are both being logical but we haven't convinced the other. That is fine. We shouldn't aim to convince, but to use this opportunity to clarify our position for ourselves, make our rationalizations better.

The reason I use choice is because it is how one could interpret quantum physics. The standards are that the behavior cannot be explained by either determinism or uniformly distributed chance. I don't know what consciousness means for those particles. All we can see is the behavior, so this interpretation is speculation, not science. However, I think it is a more sensible speculation than to say, as many worlds does, that an entirely new universe we can't see pops into existence for every possible outcome at the quantum level, just to eliminate the choice interpretation.

We agree in Scenario #1 that God sustains, not only creates.

I think I understand that the Gods, both good ones and bad ones, in Scenario #2 would be combinations of particles that just happened to happen. Given an infinite amount of time everything will happen. This argument is similar to the anthropic principle. However, is Scenario #2 even appropriate for the reality we experience? Is reality really reducible to particles, and is the universe mechanistic? That has to be established, or at least noted as unsettled, before one can say much about Scenario #2. In Scenario #1 we started with consciousness. That evidently exists because we are conscious.

You used the phrase "manifesting consciousness". The more correct phrase for Scenario #1 is "consciousness manifesting things" because consciousness is the given in Scenario #1. Which brings me back to the question: what are things? We make cultural things, like chairs and computers, out of stuff, but culturally what we see are the objects we have made, not the underlying stuff they are made out of. None of those things is conscious as a chair or a computer, although the stuff they are made out of may be conscious. The closest we get to creating something conscious is through procreation. The resulting baby is conscious, unlike the chair or the computer. I think the reason the baby is different from the computer is that the baby can make choices and the computer can't. That is another reason why I keep coming back to choice.

----------


## desiresjab

Not being a Christian, I feel no pressure to posit free will, either. I don't know if we have it. We are at least free enough to believe we are making decisions and to see ourselves as having free will. We seem to ourselves just like beings with free will.

In case there is a Judaic/Christian God, it is better that I am not free. If there is a God, then no merciful being would send his children off to an eternity of punishment when they were not responsible for their own actions. Maybe we are only semi-responsible, at best. The Primal Consciousness would know this and cut us a break, if he were really merciful and compassionate.

----------


## YesNo

I'm not worried about hell. Our ability to make choices, not just imaginary ones and not with absolute freedom, makes sense to me. I see no reason to reject it.

----------


## desiresjab

Kidnapped and enslaved by the Devil is how I would interpret hell. Evil Gods must take some pleasure in pain. Or maybe their pleasure is in capturing subjects of the altruistic God. It is not out of the question that we are caught in the middle of a battle between these high lords who did not create the universe or all things. Like Pork Chop Hill, we are not important in ourselves, but as symbolic turf.

----------


## YesNo

As a theme for fantasy fiction, that view of the Devil might work. The only thing I know of the afterlife is what people who have had near-death experiences or people who have received after-death communications tell me. I would count the events after the crucifixion of Jesus among those communications and experiences, realizing the canonical accounts might have been modified for theological correctness. Personally, I imagine heaven and hell as a different, perhaps expanded, perspective on reality from our current perspectives, which are very localized.

I picked up Jeffrey Long and Paul Perry's "God and the Afterlife" in a used book store a few days ago. They research near-death experiences. Often I don't read the books I buy, expecting to read them sometime in the future and then forgetting about them, but you are encouraging me to look at this now.

----------


## desiresjab

I cannot find any necessity yet for God to be good. An excellent case could be made and has been made that God is evil. I want to believe God is good but I need some evidence.

I cannot expect any religion to have gotten the whole thing right. I think each got only small pieces, in some cases the same pieces, but very small pieces. The common thread was probably wishful thinking. No religion is likely to have come close to the truth.

* * * * *

The Bible is very big on touting the mercy of God. At the same time, God shows hardly any mercy at all in the Bible. This very human God is jealous, angers easily, is vengeful, and spills blood often. Not a strong recommendation for mercy. I do not presuppose any such qualities in the Primal Consciousness. Maybe it is not the Primal Consciousness, then, but one of those Gods that came about naturally through Scenario #2, which has no necessary Primal Consciousness to be found.

There can be only one cosmological excuse for the strict harshness of the biblical God, if he is as good as advertised--his own impotence at certain acts--it is the only way he can protect his children. If they play in the street he must spank them, though it hurts him. This means the devil is strong. Many people choose to believe in God but deny the devil. If biblical cosmology is true, there must be a devil, from the evidence in the world.

* * * * *

A more acceptable view to many might be that there is only a God and no devil, but that the world the Lord made was fraught with implicit dangers and risks. Hence, accident as well as evil can carry a man off.

As anyone can see, proving the existence of God under certain circumstances is a piece of cake compared to proving his nature. The proof God exists is but a dry thing with little inspiration in it.

* * * * *

Provided that consciousness really is indivisible, God would be proven to still be alive under Scenario #1, since the indivisible cannot be destroyed. Most Christians put their faith in something like this.

God might be the sum total of consciousness in the universe, not just all the consciousness we observe. That it is all connected at some source is an earnest wish of many.

Myself, I care only about one thing--an afterlife. Without an afterlife, who cares about anything, really? I do not really believe the afterlife has a price of admission per se, as advertised in the Bible, but the threat of punishment was necessary to keep the kids out of the road.

----------


## desiresjab

I can think of only one reasonable explanation of the biblical God's apparent wish to be worshipped constantly--without constant vigilance we easily lapse back into our animal ways, whereby Cain killed Abel, etc. Again, not for him, but for us. That would be my rationale.

----------


## YesNo

There is no necessity to reduce God to any particular religion. So talking about God does not imply accepting Christianity or the Muslim religion or some Hindu religion or a New Age religion. All of these are ways to approach God. There are multiple perspectives on the Biblical God depending on the original sources of the text. 

I also don't think there is much of a case for a God that is "evil". That is because near-death experiences do not report such an evil God. 

I also don't think there is any point in pursuing the mechanistic Scenario #2 because quantum physics shows that reality is not mechanistic.

----------


## desiresjab

> There is no necessity to reduce God to any particular religion. So talking about God does not imply accepting Christianity or the Muslim religion or some Hindu religion or a New Age religion. All of these are ways to approach God. There are multiple perspectives on the Biblical God depending on the original sources of the text.


I already said I think all the religions together got only a little bit of the truth, sometimes overlapping. What are you accusing me of? All these religions _think_ they are a way to approach God. So far they haven't done much approaching, from the evidence at hand. One thing about higher consciousness--it does not seem to be contagious. We will always be referring back to religions and human experience.




> I also don't think there is much of a case for a God that is "evil". That is because near-death experiences do not report such an evil God.


Well, I think there is not much of a case, either, for a God that is good. It seems pretty reasonable to me that if there are good Gods there might be bad Gods as well.




> I also don't think there is any point in pursuing the mechanistic Scenario #2 because quantum physics shows that reality is not mechanistic.


Is there a particular reason you think Scenario #2 has to represent a mechanistic reality? I know "mechanistic" smacks of the antiquated to you and is practically a dirty word. If existence was always here, how does that translate as necessarily mechanistic? I am not saying you are wrong, I just want to know what you mean by the word. Do you think something which evolves by itself is mechanistic because there would be no guiding hand of consciousness, so to speak?

* * * * *

Suppose for a moment that pipelines to higher consciousness do exist. I know that east Injun texts exist which speak in terms of trillions of years of existence, because a long time ago (ahem! relatively speaking) I read some of them. I am not sure if it was the Bhagavad Gita or something else. Man, that is nearing 50 years ago.

Trillions of years is thousands of times older than we presently suppose, scientifically, our own universe to be. The east Injuns have always been good at numbers. Were they just making up the largest ones they could write in these cases, or did they mean what they said?

----------


## desiresjab

To have an afterlife is no more unlikely than to have a conscious life. Under Scenario #2, just as life _happens_ to be part of it, afterlife might also _happen_ to be part of it.

And what if "Things" are eternal but a master consciousness is also eternal? It seems to me there could still be God under Scenario #2. And in this case an afterlife of some sort seems very likely to me. I sometimes think of ourselves and of all living things as the sensory organs of this master consciousness, who created us because it needs us and amoebas to experience life at every level. Now that would be a God to me!

An afterlife under Scenario #2 and Scenario #1 seems likely to me, because why would God make throwaway parts? Why would those consciousnesses finally maturing at what they do, be suddenly discarded? God would make permanent beings, especially if he is a good God, because reflective beings desire permanence more than anything else.

----------


## YesNo

Regarding this question: "I already said I think all the religions together got only a little bit of the truth, sometimes overlapping. What are you accusing me of?"

There is no accusation. We agree that religions only approximate the truth. They are like someone trying to write out the digits of pi. Some write 3.14. Some write 3.1415. Some go further. Some write 3.212 and get it wrong. No matter how far they go, they are still approximating with these digits. Here's the problem: Is pi itself real? One can't resolve that by saying because 3.14 and 3.1415 are different or not exactly what pi is that pi is not real. The same with God. One can't complain about particular religions and claim that God does not exist. Furthermore, if someone were able to write out all the digits of pi, that would convince me that pi did not exist. Same with God. If some religious texts exactly represented God non-metaphorically that would prove God did not exist.
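The digits-of-pi analogy can be made concrete: successive truncations disagree with one another, and none of them equals pi, yet they all approximate the same single value, each one less wrong than the last. A small Python sketch (the `truncate` helper is invented here purely for illustration):

```python
from math import pi

def truncate(x, digits):
    """Cut x off after `digits` decimal places, without rounding --
    like a 'religion' writing down only the digits of pi it knows."""
    factor = 10 ** digits
    return int(x * factor) / factor

# Four "religions", each writing out a different finite approximation.
approximations = [truncate(pi, d) for d in (2, 4, 6, 8)]
print(approximations)   # [3.14, 3.1415, 3.141592, 3.14159265]

# The approximations differ, but every one points at the same value:
# each additional pair of digits shrinks the error.
errors = [abs(pi - a) for a in approximations]
assert errors == sorted(errors, reverse=True)
```

None of the truncations *is* pi, but their disagreement is no argument that pi is unreal, which is the point the analogy is making about God.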

I don't know what Scenario #2 is if it is not mechanistic. In that scenario, if I understand it, there are a finite or infinite number of particles to which everything, including God and our consciousness, can be reduced via different arrangements of them. With infinite time, by chance or determinism, all arrangements will occur over and over again. The underlying problem is: can such reductions be made?

Last night I watched a set of six interviews on the observer problem in quantum physics: https://www.closertotruth.com/series/what-are-observers If we are talking about particles, we need to get quantum physics involved. The problem is that our only experience of a quantum particle occurs when we make a measurement and then it appears as a particle. We can only see quantum reality as particles. Furthermore, we can't predict exactly what will happen to any specific particle later, but we can give a probability distribution for what we might expect to see. That probability distribution is the wave function. If we could exactly predict what the particle would do there would be no mystery, but now there is this critical mystery: When we are not looking at a particular particle what is it doing? There are three general positions based on these interviews:

1) When we are not looking at the particle it is in a superposition of many possibilities. We are also in those superpositions and this creates a many worlds description of reality. See the interviews of Sean Carroll and Alan Guth.

2) When we are not looking at the particle it is in a superposition of many possibilities, but when we observe the particle those possibilities collapse into one definite particle, which is the only thing we can ever actually measure. This would be the Copenhagen or decoherence position. See the interviews of Laura Mersini-Houghton and Seth Lloyd and perhaps some of Paul Davies.

3) When we are not looking at the particle it has no properties to manifest. The wave function is only valuable for mathematical predictions. It is not reality. When we observe what a particle does, it makes a choice. This would be my position. For something similar, see David Chalmers and perhaps some of Paul Davies.

For Scenario #2 to make sense it needs to fit one of those three interpretations.

Regarding this statement: "Well, I think there is not much of a case, either, for a God that is good. It seems pretty resasonable to me that if there are good Gods there might be bad Gods as well."

Perhaps we differ on perspective. I am not concerned with something being "reasonable" without empirical evidence to back it up. That is why I need the information coming from those near-death experiences, mystical experiences or personal experiences of my own subjectivity to tell me if there is an afterlife or a God. I won't trust my reasoning alone to get there without some empirical evidence to back it up. You may be a "rationalist". I am likely best described as an "empiricist".

----------


## desiresjab

You get stuck on notions, so I have to go against my speculative nature and play the rationalist. Do you have some empirical evidence for a good God, good God?

You say:

_When we observe what a particle does it makes a choice_.

It makes a choice, eh? You are like a religious person with this chant, then all you do is refer me to your bibles. Convince some people that quantum particles make choices. How many people have the professional convincers convinced? I am not going to plough through your references. If the people you have read did their job, you should be able to present the case to me.

I can easily improvise an argument that the particle in question did not make a choice. I say it was in a specific place all the time as a particle. Since we were not observing it all of that time, how are we supposed to know where it was or what it was doing? Quantum particles operate under a different set of rules than big objects. Just because we cannot predict their positions as if we were dealing with planets, does not give them consciousness or the ability to make choices. It simply means we are just beginning to understand the rules of the realm, and may be getting ahead of ourselves.

I don't know, and the above is my own improvised argument. You should be able to shoot holes in it. And you should be able to discuss it instead of asking me to plough through a hundred articles and books.

Observation itself may change quantum phenomena. It does not do much to the side of a barn when you shoot a beam of light at it. Photons are on the same size scale as the particles we are using them to observe. So they will "blow" them around a bit, I suppose.

You keep making the statement, lad. Let's see the evidence now instead of the continual statement repeated. If you have strong reasons, present them. I am only playing devil's advocate because I have to. I am receptive to the idea of "cosmic consciousness," but not ready to state it as truth. You state it as truth. You claim you are empirically convinced. Let's go. That means evidence.

----------


## YesNo

In response to: "Do you have some empirical evidence for a good God, good God?" 

Yes, near-death experiences and after-death communications. They don't report an evil God.

In response to: "You are like a religious person with this chant, then all you do is refer me to your bibles."

I referred you to interviews of people with three very different opinions: (1) many worlds, the materialist perspective, (2) decoherence, the dualist perspective and (3) panpsychism, the idealist perspective. I disagreed with most of the interviewees. They aren't my bibles. I read them or listen to them to see where I differ from them.

In response to: "You claim you are empirically convinced. Let's go. That means evidence."

The evidence is partially in the near-death experiences. These are case studies. The quantum evidence comes from repeatable science experiments. My position is an interpretation of them. I think the idealist interpretation fits the problem better.

I don't think you should accept "cosmic consciousness" without understanding it. It is easy to get stuck in some New Age fog about quantum reality.

----------


## desiresjab

The near death experiences are not bad evidence, after a fashion, of course. I like it at least. You say they do not report an evil God. Do they report any God at all?

My biggest doubt comes from observing light bulbs blow, and realizing stars (of the right size) expand to a red giant before going out. One may take a split second and the other millions of years, but they seem like similar phenomena on one level of abstraction. There is a little light show from both before they expire. The brain may put on its own light show just before we die. The light show could even be culturally conditioned. For that reason it would be interesting to see the reports of people who came from non-Christian cultures. Are their experiences any different, I wonder.

What is needed is a way of inducing this state into humans we can later revive and get reports from after a prolonged experience. Performing such experiments would walk an ethical tightrope, of course. I can see such experiments providing valuable evidence, relating, possibly, to both dreams and space travel.

I believe astral travel is possible, and I believe experiments can be designed to test that. I tried it as a young man. But I got scared and backed out when it felt like it was beginning to work. Now I would fear for my old body more than my "soul."

The next paradigm in human evolution may well come from an arcane area like this, rather than pure scientific research in physics. This has been my feeling for some time. I believed it before I ever read it. Once that door of possibility is opened a crack, earnest research will begin on a large scale.

One of the drawbacks may be that financially valuable results could be scarce or a long time coming in this field. I think the U.S. administration at the present time is skeptical of even the value of the space program in general. They would be much more interested, for instance, in new alloys that might be produced under the zero gravity of space than in any speculative advances on our nature and origin. The human race has outlasted all administrations so far. Our angle of interest is a changing feature of us.

Some high-powered minds like the mighty Brian Josephson have already made the switch to the future line of research. They went southpaw. Probably only their great standing keeps the pitchforks and torches away.

Truth be told, this field of research will always be a magnet for charlatanism, sloppy experiments, and results obtained because they were desired. Maintaining scientific discipline, and distinguishing disciplined results from a huge tangle of less disciplined ones, will be key problems in the coming paradigm, as they already are now.

----------


## desiresjab

I forgot to add that I think it is a trend in physics today to name your theories after appealing human abstractions. In songwriting they are called "hooks."

_Relativity, String Theory, The Big Bang, Many Worlds, Cosmic Inflation...Particle Choices_--these are all hooks, carefully chosen names to draw people in, even those named long after their inception. The hooks are so powerful and compelling they gain admirers and become our favorite songs. Hooks will be an even greater problem in the future, I can foresee, making it more difficult to distinguish good research from good names.

----------


## YesNo

Some of the near-death experiences mention God and that God isn't restricted to one religion or no religion or a particular culture which is as one would expect it to be if it is true. I mentioned earlier a book about this: Long and Perry, "God and the Afterlife" that I skimmed recently. Other information is available at http://www.nderf.org/index.htm Having said that, I don't spend too much time looking at this research except to get a general idea what the results are. Religious groups have to come to terms with this evidence as much as atheists. As a panentheist, these results don't contradict anything that I think is true.

I don't know much about astral travel. I have found out how to see auras. They are easy and safer than astral travel. 

I agree with you that one has to avoid foggy results, including the fog in established science with its "hooks", as you put it. For example, I don't see how an empirical scientist can even consider many worlds that no one can see as an interpretation for anything. A speculative science fiction writer might find it cool. As a reader I would find it boring. The same thing goes for black or dark stuff in the universe that no one can directly observe that keeps a current gravitation theory afloat. That there exist other gravitation theories that don't require these things is all the more reason to modify the current one and get on with it. However, developing experiments to prove or disprove any of these is worth doing. It improves our skills and knowledge. Part of my interest in looking at scientific results that I question is to ask what cultural motivation underlies these beliefs.

I am interested at the moment in "quantum computing". I don't understand what underlies it. It might challenge my idealist perspective but perhaps it doesn't.

----------


## desiresjab

After doing some cursory reading on near-death experiences, I find the following to be true:

A small percentage of people (anywhere from 1% to 25%) report hell and/or demons in their near-death experiences. So perhaps there is some evidence for an evil God after all. The scientific rationale for NDEs is similar to mine. It is interesting that a person sleeping can detect a bright light shined at their eyelids and can give signals back to the experimenter while remaining asleep. My own problem with _lucid dreaming_, and I _have_ had them, is that I wake up every time I become aware I am dreaming.

I don't know what my swami boys up in the Himalayas can do. Reports vary. When talking about this subject expect hyperbole and wishful thinking. The tiger swamis are supposed to live around tigers like house cats and command them. Recently we got a glimpse of their modern spiritualism when they were charged with selling tiger body parts on the black market. Maybe donations were down.

There is not a single human beyond corruption--Papal assistants, tiger swamis, TV ministers. Where does that leave the rest of us? Well, it leaves us without power, always a good place from which to begin a spiritual quest.

Apparently, Christians never see Buddha coming to pick them up in the taxi to heaven; it is always Jesus. Hindus never see Jesus coming. The experiences do seem to have strong cultural inflections.

* * * * *

The only way I can reconcile heaven with a merciful God is if you get what you believe at death. Those who believe in the Christian heaven get that. Muslims get a Muslim heaven replete with immaculate virgins. I hope there is sort of a library where Christians can check out virgins for a while, too.

That is the preparation for the afterlife we are in. We are here to imagine it so strongly that it shapes it beforehand at quantum level. It is our ticket. One way or another, we are leaving, but there are different destinations. All tickets are not the same. For all we know, only the imaginations of the devout work hard enough to put some extra shape on their afterlife. Other peoples' occasional musings and vague beliefs may not be enough to transform the quantum architecture of a generic afterlife into something more special, which could be the whole point of religious devotion.

----------


## tailor STATELY

As a man of faith I've been enjoying your recent interchange of ideas and comments, and I keep coming back to a remembrance of "The King Follett Sermon" by Joseph Smith, Jr. (First President of The Church of Jesus Christ of Latter-day Saints). The "Sermon" isn't canonized by the church, hence Mormon literature, but it offers insights into the character of God as revealed to Joseph... http://mldb.byu.edu/follett.htm I've been reflecting upon the "Sermon" and its consequences for years and continue my study within the canon of LDS scripture.

Ta ! _(short for tarradiddle)_,
tailor STATELY

----------


## YesNo

Religions that have an immanent and transcendent view of God would be "panentheistic" by my view of the word, however different their practices or texts may be. That includes Christians, Hindus, pagans, many others, and even some atheists who acknowledge their own subjectivity, which is hard not to acknowledge. One thing I disagree with in Joseph Smith's writing is this, which comes from John and is similar to the beliefs of other Christian religions: "This is life eternal--to know God and Jesus Christ, whom he has sent." The only part I disagree with is the implication that this is the only way. Life like ours on other planets will not know Jesus, nor will such life in other universes. This can't be the only way. 

I don't see anything other than that to disagree with because I don't know enough about it. He did mention something interesting about the Devil: 

"The contention in heaven was this: Jesus said there would be certain souls that would not be saved, and the devil said he could save them all. The grand council gave in for Jesus Christ. So the devil rebelled against God and fell, with all who put up their heads for him." http://mldb.byu.edu/follett.htm

That brings up the idea of hell that desiresjab mentioned. Some people do experience hellish near death experiences. I don't think that implies God is evil. Nor do I think that implies there is an eternal hell. Long and Perry have a chapter on hellish experiences in "God and the Afterlife". Long, I assume, wrote, "I never read an NDE describing God casting the NDEr into an irredeemable hellish realm." (page 171) He speculates that they would be there because of "very poor choices" and they "have the free will to both make good choices and return to the heavenly realms". 

Regarding cultural influences on what those having an NDE saw, he asked them "Have your religious beliefs/spiritual practices changed specifically as a result of your experience?" 73 percent said they had. (page 189) They may go into these NDEs with a cultural bias, but many come out with a changed perspective.

----------


## tailor STATELY

> Life like ours on other planets will not know Jesus, nor will such life in other universes. This can't be the only way.


 The doctrine of my faith teaches that our Savior is the Savior of all worlds. A poem by Joseph Smith, Jr. that resonates for me:


> For the Lord he is God, and his life never ends,
> And besides him there ne’er was a Saviour of men. …
> He’s the Saviour, and only begotten of God—
> By him, of him, and through him, the worlds were all made,
> Even all that career in the heavens so broad,
> Whose inhabitants, too, from the first to the last,
> Are sav’d by the very same Saviour of ours;
> And, of course, are begotten God’s daughters and sons,
> By the very same truths, and the very same pow’rs.”
> (Times and Seasons 4:82–85.)


... a link to one of my favorite hymns: http://www.timesandseasons.org/harch...-kolob-lyrics/

----------


## YesNo

I can see how that would be the case because the divine is "one", but other cultures may use other names and practices to approach the one divine. Although I don't think Christianity is the "only" way to the divine, Christianity is still "a" way to the divine, and there is no need to convert to something better.

----------


## YesNo

I have been re-reading John Moffat's "Reinventing Gravity". I am at the part about his theory of the variable speed of light at the beginning of the universe. He rejects the various inflation theories and adjusts Einstein's special relativity so that the speed of light is not a constant. This allows for the universe to be homogeneous without invoking inflation. 

One of the ideas I found interesting is the "bimetric" idea, which separates the speed of light from the speed of gravitational waves. These two would vary relative to each other to avoid inflation, in a different way from the variable-speed-of-light theory he originated above. Generally it is believed that there is one metric, that the speed of light is constant, and that gravitational waves travel at the speed of light.

----------


## YesNo

Moffat mentioned that he is not the only one who has promoted the variable speed of light in a vacuum as an alternative to inflation to get the universe into a homogeneous state. More generally the variable speed of light in a vacuum has been considered by others. Here is a survey of these ideas: https://en.wikipedia.org/wiki/Variable_speed_of_light

----------


## YesNo

I just came across another survey article, at a deeper level than the Wikipedia article, by Joao Magueijo, whom Moffat mentioned. It was written in 2003, so it is older than the Moffat summary I am reading, which was written in 2009: http://cds.cern.ch/record/618057/files/0305457.pdf

Also it looks like a test of this may be underway, perhaps to complete in the next five years, with improved measurements of the "spectral index" for which they made a prediction based on their theory: https://www.theguardian.com/science/...soon-be-tested This article is less than a year old.

----------


## YesNo

I have nearly completed a second reading of Moffat's book. I've come to realize that there are many people looking for modified gravity theories because dark matter has so far not been found. One has to do one or the other: modify the gravity theory or find the dark stuff.

One blog I found interesting was Sabine Hossenfelder's: http://backreaction.blogspot.de/2015...ith-black.html Here is the archive header for the paper she references: https://arxiv.org/abs/1502.01677 The Event Horizon Telescope may be a way to falsify either Einstein's general relativity or Moffat's MOG. Here is an update on the project: http://eventhorizontelescope.org/blog/eht-update

----------


## YesNo

I found out that John Moffat has a more recent book (2014) on the Higgs boson and it is in a local library. He writes very well. Maybe he'll help me figure out what that boson is.

----------


## desiresjab

If there is a God, and if God knows our future, does not his knowing then preclude our having free will? For if God knows, then it is predestined, is it not? And if it is predestined, our sense of free will and choice is illusory, is it not?

Would God then have made a universe whose future he could not read? Or could he read it if he chose to but simply has the will power and the character never to peek?

Isn't it the position of _several_ major world religions that God knows everything, including the future? I think it is safe to say this was/is the position of many Christians I have known quite well. I cannot remember any scriptural support for the position right now. Maybe there is some.

Anyway, _several_ of the world's major religions believe God is ubiquitous and all-knowing. But it seems to me this idea might be inimical to the idea of free will. Am I wrong?

----------


## YesNo

If you assume God knows our future exactly, then you have assumed we have no free will.

However, if we do have some free will, then he doesn't exactly know our future. 

Can one reconcile an omniscient God with one who does not know more than probabilistically what we will do in the future? I think one can. If one defines omniscient to be knowing everything there is to know, then God would be omniscient and still not know exactly what we will do. We have our free will and God has his omniscience. 

I dont speak for any religion. I am sure some religious people think we have no free will because God knows everything (more than what there is to know). However, I think that leads to a contradiction. Not that it really matters since a religion is about establishing a relationship to God, not obtaining philosophic knowledge.

----------


## desiresjab

The phrase *Some free will* is curious. My belief is leaning differently. I believe we may have no free will but are asymptotically close to it, so close we cannot tell if we are free or puppets of fate.

The two phrases *some free will* and *asymptotically close to free will* may be trying to express approximately the same idea. But either leaves me with no idea what God is allowed to know so that I may still have free will. Very tricky of God, I must say.

God may have built the discoveries of Gödel into the universe. Free will may be one of those questions for which we cannot even decide whether it has an actual answer.

One question to ask ourselves is whether we ourselves could build something whose future would necessarily remain shrouded in mystery to us. If we can do that, it is easy to assume God can too.

One might naturally argue that we built this country and its future is unknown to us. God made the dirt, we cultivated it and gave our patch its own name.

----------


## desiresjab

> If you assume God knows our future exactly, then you have assumed we have no free will.
> 
> However, if we do have some free will, then he doesn't exactly know our future. 
> 
> Can one reconcile an omniscient God with one who does not know more than probabilistically what we will do in the future? I think one can. If one defines omniscient to be knowing everything there is to know, then God would be omniscient and still not know exactly what we will do. We have our free will and God has his omniscience. 
> 
> I dont speak for any religion. I am sure some religious people think we have no free will because God knows everything (more than what there is to know). However, I think that leads to a contradiction. Not that it really matters since a religion is about establishing a relationship to God, not obtaining philosophic knowledge.


Sounds like an opinion to me. The opinion of some would be that religion is about controlling the populace and always has been. The belief you expressed would be exactly what the controllers want the controlled to say.

Besides, how many of the major religions are about having a personal relationship with God? Buddhism is not about that, and I am not sure Hinduism is either. I have never heard any Muslim speak about achieving a personal relationship with God.

It seems this idea of religion being about a personal, loving relationship may simply reflect that you are a Westerner, raised in a religion where that rare emphasis happens to apply.

I do not know this for sure. I would like to know what others with more experience and reading in religion have to say about this. Is religion in general really about a personal relationship with God, or is it just ours?

Let us not play the silly game of calling any interaction whatsoever a personal relationship. The phrase means more than that. It means something specific. I am not sure that applies to all other religions or even a majority of them. There are probably some people with strong ideas on this on here. I would like to hear their opinions. Not likely that I will, I have found, but I would like to anyway.

----------


## Kate23

I consider myself to be a cosmologist as well, since everything that happens in the world has an impact on and depends on the Universe. Everything around us, and we ourselves, consist of energetic particles; every little thing in the Universe has the same inner structure. That is why you cannot deny the idea of reincarnation, as energy never comes from nowhere and never goes anywhere; it can only transform into a different object.

----------


## desiresjab

Dwell on a picture and you may start to see things within it you had not noticed before. Some pictures are made that way purposely, some just contain that potential by accident. You can do the same thing with philosophical concepts. If I dwell on the idea of death long enough it seems not to be the end. I can almost see more. The vision is so murky I cannot be sure what I see, yet the impression is quite strong. It averages out to more an intuition of something, rather than a clear picture. A strong intuition says there is more after death. Can't prove it, cannot even convince you. Near impossible to describe. Still, something is trying to become clear. Not sure how to let it, or if there is a way to improve the image.

----------


## YesNo

That reminds me of the "contemplation" Plotinus wrote about. It's a different way of seeing reality. Shimon Malin discusses him and Whitehead in "Nature Loves to Hide" as well as this other way of seeing.

----------


## desiresjab

> That reminds me of the "contemplation" Plotinus wrote about. It's a different way of seeing reality. Shimon Malin discusses him and Whitehead in "Nature Loves to Hide" as well as this other way of seeing.


At first, the surprising thing seems to be that human beings are still here. Of all the ill-equipped creatures that would not be likely to survive--yet here we are anyway. Before this, no single man had the power to destroy mankind, or at least civilization, in total. That was our saving grace. Ninety wiped out here, but ten survived scattered elsewhere. Those were the kind of odds we kept beating. For what? A miserable and short life, buried in our own feces until the last century. We endured millennia of discomfort for this we call life now.

I first find it surprising that we ended up here. I next find it surprising that we survived, and even seemed to flourish. The compound probability of this trio of surprises together nods toward the belief that the universe is not pointless after all, that unlikely things may be happening because there is a will for them to happen.

----------


## YesNo

It does seem unlikely that we are here at all.

----------


## desiresjab

Looking at us, that we made it is really shocking. We all know that many societies were wiped out courtesy of another. One reason we made it is that everyone was not connected yet. The world was full of little feudal fiefdoms disconnected from each other. Villages. Tyrants with the disposition to wipe out the world had not the means. But they could play hell with their neighbors.

Archimedes could figure out the volume of a sphere, but man did not yet know what to do with his feces. Pooping into a hole in your floor into a rivulet running beneath was the ancient equivalent of indoor plumbing.

----------


## YesNo

At least they had a nice view of the river.

----------


## desiresjab

Not too nice for the folks downstream where the water slowed and turds clogged on bushes and fouled embankments. They are still doing that and worse in India and other places. Public squatting is a tradition. I once figured out what the pile of untreated human waste from a single day globally would look like gathered in one place. Make um big heap.

* * * * *

I read or saw somewhere that a famous cosmologist said the entire universe might be up to 10^23 times more extensive than what we know of.

That is 100,000,000,000,000,000,000,000. Well, up to a hundred sextillion times larger. I would call that a fairly extensive place.
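For a sense of that factor, the arithmetic can be checked in a couple of lines (a minimal sketch of the 10^23 figure quoted above):

```python
# The claimed ratio of the whole universe to the part we know of.
factor = 10 ** 23

# Written out, it is a 1 followed by 23 zeros: one hundred sextillion
# (a sextillion being 10**21 on the short scale).
digits = str(factor)
print(digits)                   # 100000000000000000000000
print(len(digits) - 1)          # 23 zeros after the leading 1
print(factor == 100 * 10 ** 21) # True
```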

----------


## YesNo

As long as it's not infinitely large we have hope to get from one end to the other.

----------


## desiresjab

Its size suggests its potential diversity.

----------


## YesNo

This diversity is quite large. Given enough data it might not be possible to have a relativistic and deterministic gravitation theory. That would be something worth trying to show. Indeterminism would not only be at the quantum level but at the gravitation level as well.

----------


## desiresjab

Now for a tough question: Is infinite diversity possible in a finite universe?

----------


## YesNo

I would guess not, but I don't know. I am assuming the finite number of things are isolated from each other, so there are no infinitesimal distances either.

----------


## desiresjab

Let's cut right to the chase. Can a finite universe be infinite? That is one thing we are interested in knowing. We can construct finite universes in our minds which have some aspects of infinity. We know that a coastline has some aspects of infinity; that is why coastlines are called infinitely long in fractal geometry. But once the surfaces are too small to reflect a photon, the mathematics keeps right on going as if there were somewhere to go. This may be wishful thinking.
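The coastline idea above has a standard quantitative form, Richardson's scaling law, which a few lines can illustrate (a toy sketch: the dimension D = 1.25 is the figure often quoted for the west coast of Britain, assumed here, and the constant c is arbitrary):

```python
# Richardson's empirical law: measuring a coastline with a ruler of
# length eps gives a total length L(eps) = c * eps**(1 - D), where D
# is the fractal dimension. For D > 1 the length grows as eps shrinks.
D = 1.25  # assumed fractal dimension (often quoted for Britain's west coast)
c = 1.0   # arbitrary scale constant

def measured_length(eps, D=D, c=c):
    """Length measured with ruler size eps under Richardson scaling."""
    return c * eps ** (1 - D)

for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, round(measured_length(eps), 3))
```

The measured length grows without bound as the ruler shrinks, even past the scale where, as noted above, the surfaces are too small to reflect a photon and the mathematics no longer describes anything physical.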

----------


## desiresjab

Infinite diversity must be called uniformity!

----------


## YesNo

Uniformly diverse.

----------


## desiresjab

> Uniformly diverse.


Correct...it swallows its own tail.

----------


## YesNo

Omnivorously diverse.

----------


## desiresjab

Paradoxes may merely be vertices on the boundary of the artificial reality in which we find ourselves embedded. Around the edges of our "universe," and only there, might traces of the imperfect and fictional nature of our simulated reality become evident to a few skeptical outcasts among us. It is not inconceivable for someone in the future to prove that our reality--our universe--is artificial. What does that mean? It means a construct. Certainly there are those who will argue that no construct can ever be complex enough to represent parts of reality we already know, such as consciousness, for instance. Their doubt comes no closer to constituting a proof than my lack of doubt does. What the above also means is that we would have to accept that it is we who are the artificially intelligent life form. If we were created instead of occurring entirely randomly, then we are artifice, and not our own.

----------


## YesNo

If we were created we could be called artificial from the perspective of the creator.

----------


## desiresjab

> If we were created we could be called artificial from the perspective of the creator.


Yes indeed.

At this point a created universe like ours "seems" so much more likely than a randomly generated one that it is frustrating to make so little progress demonstrating it.

One has to suspect that even without a God or conscious subatomic particles to assist it, mankind is on the road to immortality. Lifespans will grow longer. Then man will learn how to prolong a cyber essence indefinitely, one which can synthesize experience. 

We were born too soon. It may only be later generations that get in on the immortality act of science and mankind. People of the future would hate to be born right now. How much would you have hated living even in pre-Civil War America? Not even knowing there were other galaxies; not even knowing the age of the world; not even knowing the age of the universe; not even knowing how to hygienically dispose of your feces en masse; having conquered darkness only with whale oil; with advanced transportation being a good horse and buggy.

But worse than all of the above were the backward notions on everything from race relations to religion to education one would have encountered. A sense of mystery still surrounded such phenomena as the pyramids. But when you look at their overall understanding and overall standard of living and development, one wipes one's brow in relief that it was them and not us. For we could easily have been born into a more ignorant and backward time.

That is exactly how men in the future will see it, and how they will see us. "No thanks," would be their reply to living in our era of backward ignorance. They themselves will live thousands of years, or longer, and be able to do things now considered worthy of only pure fantasy fiction.

----------


## YesNo

I suspect there is also a way, say through Plotinus's creative contemplation, for creation to occur without it being artificial. It is not really a making of something.

----------


## desiresjab

> I suspect there is also a way, say through Plotinus's creative contemplation, for creation to occur without it being artificial. It is not really a making of something.


A willful creator is more likely than the random interaction of "unbiased" particles. If we allow highly biased particles in our universe, then we are already half admitting that there was some kind of "help" beyond randomness assisting in the job of creating life and matter, making it somewhat easier and somewhat speedier to have these things.

I believe trouble comes when one tries to shut out any kind of bias. Particles that were not biased toward anything would never do a thing that was permanent. Particles of the universe seem biased already, just by the fact that we have something rather than nothing at all.

That is the trouble, I believe, with shutting bias out, or trying to--it is unrealistic, particles of the universe are already biased.

Without bias, not enough time has passed for this universe to be here, it seems to me. Totally without bias, I do not see how a universe of particles could get built or stay built, especially in a mere 13.72 billion years. The harder I look at it the more obvious it seems that there had to have been help from some kind of bias for it to get done in that amount of time across that amount of space.

Do you see how much more likely it is on a strictly probabilistic basis?

Is carbon really unbiased? I do not think so.

----------


## desiresjab

For that amount of organization (the universe and us) to get done across that amount of space and time (13.72 billion years), there had to have been bias, I believe. 

Stated differently, as three facts: (1) Even the small corner of the universe we are familiar with is quite vast, but still finite; (2) 13.72 billion years is a puny amount of time; (3) we are quite complex. The three facts do not go together, but there they are, and here we are.

All I am saying in this post and the last, though I am not sure how well, is this: one must admit that 13.72 billion years is very little time for unbiased matter to get down to the randomly occurring business of creating life and consciousness in all its complexity.

----------


## desiresjab

What more can you say of Plotinus's creative contemplation? I expect that is what God did. But we are still artificial, aren't we?

----------


## desiresjab

The way we are using the term _unbiased_, neon and argon would be unbiased particles. Unbiased particles have no proclivity to mix with anything. In reality, most of the elements of our chemistry are gregarious and biased against non-interaction, as we know from high school. Our philosophical contention is that if elements did not "like" to socialize, there certainly would not have been time for complex life to develop already. Particles come pre-made with the proclivity to socialize. They did not have to come that way. It did not have to be that way, but it is. By and large, particles are quite gregarious. Now how could anyone refer to that as unbiased?

I have to wonder what other proclivities particles might come with.

----------


## desiresjab

Self organization might be another proclivity of particles.

----------


## desiresjab

Whoops, wrong thread!

----------


## YesNo

I don't think particles are unbiased, that is, totally random, either. There is the idea of something having a "disposition" to behave one way or the other. That is different from being deterministic. 

I don't know much about Plotinus. I am reading some of him now at http://www.sacred-texts.com/cla/plotenn/index.htm There is also a survey article at SEP: https://plato.stanford.edu/entries/plotinus/ I found out about Plotinus (and Whitehead) by reading Shimon Malin's "Nature Loves to Hide". Malin is a physicist writing about the quantum collapse of the wave function. His book is one of the clearest I have read about quantum physics.

----------


## desiresjab

I can't get my head around _random_ anymore, either.

_Disposition_ is a very good word in this context. One might even put in a little work describing exactly what _disposition_ entails. The universe and matter only have to possess disposition for randomness to be escorted from the cosmological party.

One cannot deny the value of the concept of randomness, however. It has proven of immense scientific value and will probably continue to do so. A more refined concept which has figured out how to acknowledge the influence of disposition would probably be part of the new paradigm.

----------


## YesNo

The main benefit of randomness seems to me to be in statistical work. I don't think reality has anything random in it. Much of what we don't know, like whether the market will go up or down on Monday, depends on a lot of choices people make, not on something random. We just don't know, and so we think of it as random, or unknown but perhaps predictable to some extent.

----------


## desiresjab

The dispositions of various types of matter towards one another could be much different, it seems. Most matter could have the disposition of noble gases, an unwillingness to mix.

But since the general disposition of matter is to mix, that already does not seem neutral to me. Neutrality is needed if one is going to tout matter and the creation of life as having happened at random. We should have known that easily. What took us so long? We would not be here to figure things out in a universe where matter was more noble. We should have cut to the chase. We are here; the universe is not noble; the universe cannot be neutral; the universe is already out of neutral and in gear.

----------


## desiresjab

I will try to investigate whether a created universe is more probable than an accidental one. Please bear in mind the subject is a difficult one for me where solid purchases are rare. Sometimes it consists merely of fleeting epiphanies so brief that details escape before words can cage them. If it sometimes seems as if I do not know what I am talking about, it is because I usually do not know. I have intuitions, which I am trying to organize usefully. On some days we will likely need our micrometers to search for any progress that might have been made.

Now it is true that most blades of grass and most trees and most flowers are not planned. But this is not "as" true if a partially obscured "disposition" is at work in grass and trees and flowers, and for that matter, stars.

One of the first things we need to do is dehumanize our terms. _Disposition_ needs to become _proclivity, propensity_ or _potential_. We do not at this time need to posit that matter has any kind of inherent consciousness. That would only give us something else to defend. If we arrive at it in our deductions and musings, that is another matter.

It is not a one post job. Let this far serve as an introduction. Now I have to walk up to the lookout with my binoculars.

----------


## YesNo

I think the kalam cosmological argument that William Lane Craig promotes is a valid argument. It is a rational proof that the universe had a personal Creator. Where it can be challenged is in the second premise, the claim that the universe had a beginning. If the universe actually had a beginning, that is, if something like the Big Bang actually happened, then the kalam cosmological argument would be a proof for the existence of God. It is based on the philosophies of Al-Kindi and Al-Ghazali, which are in turn based on Plotinus and Plato.

----------


## desiresjab

Since I am trying to "prove" or disprove the existence of God myself, I am interested in any other proofs. Your references were not clear to me. Do you have a link?

My first effort will simply be to try and demonstrate that a created universe is more likely than a random one. By created, one is allowed to mean anything but random.

I just lost a long post I cannot seem to recover. Dang it! That is the advantage of little bits.

----------


## desiresjab

Any help at all in coming about or developing as it did precludes a random universe. For things to come about with no proclivity to come about is absurd on the face of it. The less proclivity things have to act in a certain way, the lower the probability of their acting that way.

_Universal stuff_ with no proclivity to get together and make other things has a probability of creating the universe that asymptotically approaches zero. Proclivity, that built-in potential, is the reason we are here observing at all.
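The intuition that low proclivity means long waits can be made concrete with a toy model (entirely hypothetical, not a physical claim): if each encounter between two bits of universal stuff has probability p of producing a lasting combination, the expected number of encounters before one forms is 1/p, which grows without bound as p approaches zero.

```python
import random

def encounters_until_bond(p, rng):
    """Count encounters until one 'sticks': a geometric waiting time
    with success probability p, so the expected count is 1/p."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)  # fixed seed so the sketch is repeatable
trials = 10_000
for p in [0.5, 0.05, 0.005]:
    mean = sum(encounters_until_bond(p, rng) for _ in range(trials)) / trials
    print(p, round(mean, 1))  # the sample mean lands near 1/p
```

This is just the geometric distribution; it illustrates the shape of the argument, not the actual physics.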

----------


## desiresjab

On the verge of horizons, we proceed.

We have seen that the connections between matter & matter are multifarious. Chemical combinations (for instance) are too numerous to be catalogued, and are in fact infinite.

With the right set of eyes nature is seen to be very busy at all times in most places. There is all kinds of commerce and trade in the chemical world (staying with the analogy) for instance. This commerce is normal and natural, not a special circumstance, as if things were made to work in combination.

Our universe seems set up for activity, just as a universe of noble gases would seem set up for inactivity.

We feel a "bias" in our setup toward activity and new combinations. We feel this same bias made the creation of life not only possible but a sure thing in our universe, given enough time. 14 billion years is not long enough to the intuition, however. Even more time should have been required to produce simple life, evolve it to complex life with minimal consciousness, and then evolve that consciousness to the high self-consciousness of man.

Not only is 14 billion years a short time in this context, but consider that the process of development was arrested and almost wiped out at least several times in mass extinctions. In other words, the process of getting to where we are now went super fast, almost copying cosmic inflation itself, and happened in spite of massive setbacks. Such success smacks of a proclivity for those things we are counting as a success.

----------


## desiresjab

An extraordinarily rare event? I don't think so. The world kept returning to life, rather nursing it back to strength, after each mass extinction. The way stellar nebulae are natural nurseries for stars and other cosmic misfits, the world is a natural nursery (the only one we know of) for experiments in animation.

----------


## desiresjab

Once you heat up gases rich with incidental elements things start to happen. That is our universe. There are many heat sources. Gravity collects the gases and they heat up under the continued action of gravity and a few basic laws.

The honest investigator is not allowed to let it go with only a note that our universe seems hugely biased toward activity and creation, compared to other universes we can easily imagine, that is. This observation must be addressed. It must be dealt with.

It is significant that we have to admit to living in a biased universe. Our universe creates things all the time--even space, for new space is being created as our universe expands. It is not expanding into what was formerly empty space, but into what was formerly nameless nothingness. There was no space there. There was no there at all.

We have to ask ourselves why this is so. Do we actually live in a universe where runaway creation is the order of the day? If this is the case, there is no reason to assume life is not merely the tip of the iceberg of the possible. In a universe geared for runaway creation, one should not be surprised if an afterlife is part of the deal too, since we already know life is, and life is pretty strange itself, as everyone can probably admit.

The right conditions for life to kick up are scarce and scattered, but not nonexistent. It will not occur too early in the universe, for we will need some iron first, supplied by massive stars exploding as supernovae and disseminating heavy elements. After gravity collects the materials into a hot soup, the brew cooks and cools for a long while, the heavy stuff sinking to the center of the mass, where it will become the magnetic iron core of a planet. Life as we know it must have a protective magnetic field, provided by the planet's spinning molten core.

We have to suspect that our universe is open to other experiments in integration, not just on the chemical plane of our analogy, but also in areas where we are not equipped to observe the activity the way we have taught ourselves to observe chemical activity, and where we have no valid reasons for suspecting that kind of activity in the first place.

To some, our universe was _made_ that way, and to others it _just turned out_ that way. The common point is that _it is that way_.

Just because we have shifted the mantle of creator from the shoulders of a mysterious being to the universe itself, does not mean we have escaped the hard questions. 

We have to ask ourselves if it was a random accident that the universe turned out this way, or was it purposeful?

The universe has become a great creator. Did it create itself? Can something which does not exist yet create itself anyway? Why does the universe have those biases we can easily observe it to have?

We should rule out anything creating itself before it even exists. The universe did not create itself. Doesn't that feel better?

Whatever created the universe created it with certain biases. Were these biases purposeful, or were they accidents? Were they inevitable? Why?

As we can see, there is no escape just because we might now admit certain proclivities in matter to be responsible for life. We are clearly obligated to shift our attention to those proclivities and explain them as well as we can under our paradigm of randomness, or admit the universe had a separate creator who handed over the mundane duties of creation to the universe itself.

----------


## YesNo

I don't see how it could create itself. I think the cause of the creation was some agent, not an event. That is, there was a purpose involved.

----------


## desiresjab

> I don't see how it could create itself. I think the cause of the creation was some agent, not an event. That is, there was a purpose involved.


It would have to precede itself to create itself, a clear impossibility to our minds.

However, if the Hubbleverse is not all there is to the universe, the universe could be infinitely old already. If an eternity has already passed--correct me if I am wrong--isn't that the same as saying that everything that could ever possibly happen has to have happened already, and we can only be living through a repeated portion?

Is that a logical problem if we consider the universe to be infinite in age? An uncreated, non-deterministic yet non-random universe could not produce an event that had not already happened. I think that is one ineluctable reality of an infinitely old universe.

Well, there is nothing in me which demands that events surrounding me--nor even my own experiences--be new. Given long enough, events and experiences would repeat for a spell and maybe even forever, like the team of monkeys expected to type Hamlet. But if things repeat, isn't that determinism again, rearing its ugly head once more? There is an uncomfortable logical quandary when we consider the universe to be infinitely old, without a beginning. I am not sure I can get out of it.
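
The monkeys-typing-Hamlet image can be made a little more concrete. A minimal sketch (my own illustration, not from the thread, with an assumed 27-symbol alphabet): the chance that n random keystrokes reproduce a fixed n-character target is astronomically small for a single attempt, but the expected number of attempts before success is finite--so with unbounded time, repetition becomes a certainty.

```python
# Hypothetical illustration of the infinite-monkey intuition.
ALPHABET = 27  # 26 letters plus space -- a simplifying assumption

def match_probability(n):
    """Probability that n uniformly random keystrokes match a fixed n-char target."""
    return ALPHABET ** -n

def expected_attempts(n):
    """Mean number of independent n-keystroke attempts before a match (geometric distribution, 1/p)."""
    return ALPHABET ** n

target = "to be or not to be"
print(match_probability(len(target)))   # vanishingly small per attempt
print(expected_attempts(len(target)))   # yet finite -- given eternity, a match is certain
```

The point is only that "finite but huge" and "impossible" are different things: an infinitely old universe has eternity to spend.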

But wait! I just realized nothing in me demands or requires Free Will, either. I like the sound of it all right. That is why I did not want to give it up. But it is not as if my philosophy must have it or I cannot be satisfied.

I still like the idea that we possess a simulated Free Will _asymptotically close_ to the real thing. 

I suspect there are many shades of consciousness. What are some physical phenomena which might be conscious activity rather than the mechanical or random processes we think they are?

----------


## YesNo

I tend to think there are only two general kinds of causation--event causation and agent causation. If an agent does it there was some freedom involved. I don't think there is any really random stuff happening, not even quantum collapses.

----------


## desiresjab

Sometimes we have to separate concepts in our minds to see what we have so far, cease our backward extrapolating for a moment, and appraise. We think of perspective and of angles subtended. At the beach, what angle should a gull subtend at a certain distance from some observer? We know an answer only if we assume the observer is human. No one said the observer could not be an eagle, whose vision can operate at a magnification power of about 3, meaning, of course, a larger angle subtended at the same distance.

Now think about the question (How conscious is it?). At what distance in time were primate ancestors conscious enough to be human? Is the answer _only when they could ask this question_? Not they, just one man or woman. If one man asks that question, all men everywhere become immediately human, even those in the middle of murdering someone.

Seeing is to vision as thought is to consciousness. Many species appear to worry, but only humans worry about an afterlife.

How large should a star appear? _No size at all_. How large should anything be at any particular distance? _No size at all_, is the correct answer. Can you apply this answer to the concept of consciousness?

----------


## desiresjab

There are people who believe explicitly in God, at least they claim they do. To them there is no sloppy overflow in God's natural universe. God does not do anything approximately, but everything precisely, they believe. Everything God does and has created is real. Creation is part of the definition of real after all, when you think about it.

If God has a purpose in everything, what is the purpose of optical illusion? Why was a choice made to give us senses that are often unreliable? Surely, God could have done it differently, if that being has even a portion of the control over the universe attributed to him by his faithful. To say that God works in mysterious ways is the ultimate cop-out and an explanation of nothing. It explains only that you do not have an answer, or anything close to one.

For years I have suspected that Buddha got very close to the truth with his notion of Maya. It seems to me now that there is a lot more to optical illusion than the little bit one finds in entertaining books on the subject. Formal optical illusions presented in books cannot be controlled or prevented even when we follow the directions and look at the right spot.

Does God really have some interest in fooling and testing people? That idea seems awfully old-fashioned to me. In my early life there was an aunt who insisted that even dinosaur bones were something put there to tempt man from God's word.

I do not know if God has an interest in testing us, but from all the evidence, it seems said entity does have an interest in fooling us, otherwise why give us senses that cannot be relied upon consistently?

If there is a God, what then is the purpose of optical illusion? I suspect that a great deal of what we experience is an illusion of one variety or another, just not the kind slick and obvious enough to include in a coffee table book on illusions and paradoxes. But what is the purpose, then? What is a human supposed to take from a world whose content is so steeped in illusion? What is the lesson? Why illusion instead of direct truth, if you were God?

----------


## desiresjab

I can compare reality to a card trick, i.e. an illusion. The hidden top card is called reality. However, you can see I am going to cheat. The edge of the second card is showing, and that is the one I intend to turn over. If you had not seen that small edge protruding, you would have been cheated and never been any the wiser. But you did see that edge. Sometimes in real life we see that small edge too. We can tell then that what we are seeing is an illusion. Without that edge we would be none the wiser, and most of the time we see nothing but what we sincerely believe faithfully represents reality.

We see no small edges protruding, so we assume we are viewing reality and not an illusion. That seems after all like a poor reason for judging something real. Did you ever have the feeling that illusion is operating all around you constantly and you are unfortunately ill-equipped to prove it?

Go to any popular internet site of illusions. They show you how to recognize an illusion--as far as possible, that is. They show you the edge, then you can understand the trick. What about all the times we do not see an edge at all but are still viewing an illusion? Did you ever have the feeling these are very common events and not rare?

----------


## svejorange

Cosmology is a deep and wide topic--thank you for waking it up!

----------

