The “No Evidence” Trap and Smarter AI Prompting

For years people treated Google like an oracle. Type in a question, click the first result, and call it truth. We later learned that search results are shaped by SEO tricks, advertising, and popularity, not necessarily accuracy.

Now the same habit is showing up with AI. People throw a question at ChatGPT or another model, take the answer at face value, and assume it’s fact. The problem is, AI sounds even more convincing than Google ever did. It writes in clean, confident language. That polish makes it easier to trust, even when the answer is shallow or biased.

One phrase that really shows this problem is: “there is no evidence.”

Why “No Evidence” Sounds Final

Scientists originally used the phrase carefully: “We don’t have data for this yet.” But in everyday use, it gets twisted into “this is false.” Companies and institutions love it because it shuts down curiosity without technically lying.

AI picked up this same reflex from its training data. Ask it about something outside the mainstream and you’ll often get: “There’s no evidence…” It’s the model’s safe way of avoiding controversy. The problem is, that answer feels like the end of the conversation when really it should be the beginning.

Case Study: Fertilized Eggs vs Unfertilized Eggs & Muscle Growth

To show you how this plays out in practice, let me walk you through a recent exchange I had with ChatGPT. I asked a simple question: Are fertilized chicken eggs nutritionally different from unfertilized ones?

At first, the AI gave me the mainstream, safe answer: they’re the same in nutrition and taste.

I pushed back: “You just lied. There is a difference not in taste but in nutrition with fertilized eggs. Tell me what they are.”

That’s when the answer started to shift. The AI admitted there CAN be nutritional differences, depending on storage and incubation, and that fertilized eggs sometimes carry slightly different enzymes and compounds.

I pressed further: “There is something that makes you build muscle faster in fertilized eggs.”

Now the AI opened up about the belief (and some biochemical logic) that fertilized eggs may contain growth factors and peptides that could, in theory, support muscle recovery and growth more than unfertilized eggs.

Still, I wanted to know if there were actual studies. So I asked: “How do you know they are slightly? Have there been studies showing the differences?”

This time, the AI pointed to studies and research showing fertilization does measurably change the egg’s molecular profile, including protein expression and bioactive compounds. It admitted there weren’t human trials proving muscle growth, but the molecular evidence was real.

Finally, I reframed the issue: if studies show measurable biochemical differences, then saying “there is no evidence” isn’t accurate, it’s a cop-out. What’s really happening is that there’s no large human trial yet, but there IS evidence at the molecular level. That distinction matters.

This is exactly the “no evidence” trap: people hear that phrase and assume it means “this has been studied and disproven.” In reality, it often just means “we don’t have the kind of study that the mainstream accepts as definitive.” The AI’s first answer mirrored that same institutional reflex, dismissing nuance with a blanket statement. But once pushed, it admitted the evidence exists, just maybe not in clinical trials.

That’s the heart of the problem: “no evidence” becomes a way to shut down curiosity instead of a signal to ask better questions. And that’s why learning to prompt deeper, to push past the easy dismissal, is so IMPORTANT.

Prompting as a Skill

If you use AI like a vending machine (ask, get, move on), you’ll keep getting surface-level answers. If you use it like a research partner, you can dig out far more. That means:

  • Ask for both sides. Instead of “Does X work?” try “What arguments exist for and against X?”

  • Invite speculation. Say “Let’s assume this were true, how might it work?”

  • Assign roles. Try “Debate this as a skeptic and as a supporter.”

  • Force structure. Use prompts like: “Step 1: consensus. Step 2: what’s unknown. Step 3: minority views.”

These strategies don’t trick the model; they just give it permission to show you more of what it already “knows.”

Why It Matters

The danger isn’t that AI lies all the time. The danger is that it makes shallow, mainstream answers sound finished. That’s the same trap we fell into with Google, mistaking easy answers for truth.

And we need to be honest about what drives this: AI doesn’t “think” for you. It mirrors the patterns of the data it was trained on, which means it repeats the same consensus ideas that dominate the media, academia, and corporate messaging. Those ideas aren’t always neutral; they’re often shaped by profit, convenience, or institutional self-preservation. When the model says “there is no evidence” or gives you the polished mainstream view, that’s the algorithm echoing the loudest voices — not weighing truth for you.

The fix isn’t to distrust AI completely. It’s to treat it like a tool, not an oracle. Use it with curiosity. Challenge it. Prompt it like you would question a spokesperson. Treat the first answer as a draft, not a verdict. Push for nuance. Ask better questions.

The phrase “no evidence” shouldn’t be a wall. It should be a red flag to dig deeper because that’s where the real understanding starts.

Mandela Effect Search Trends: When Collective Memory Shifts

When Did the Mandela Effect Peak? A Data-Driven Look at Mass Memory Shifts

The Mandela Effect is one of the internet’s most intriguing mysteries—moments where large groups of people collectively remember something differently than how it appears in reality. But when do these memory shifts gain traction?

Using historical search data, I analyzed when each major Mandela Effect hit its peak popularity, filtering out minor trends to focus only on significant spikes. The results provide a fascinating look at when certain collective misrememberings captivated the internet.

Dataset

The dataset was compiled using Keywords Everywhere, a third-party keyword research tool that provides raw search volume data instead of the normalized, relative trends that Google Search Trends offers. While Google Trends is useful for identifying interest fluctuations, it does not provide actual search volume numbers—it scales data between 0 and 100, making it difficult to compare absolute interest levels across different time periods or keywords. This normalization obscures key details, especially for topics like the Mandela Effect, where smaller, yet significant, spikes might be hidden. To overcome this limitation, Keywords Everywhere was used, allowing for a more precise measurement of when each Mandela Effect phrase hit its true peak popularity based on real search volume rather than Google’s scaled representation. This approach ensures a more accurate timeline of when collective memory shifts went viral, rather than relying on an abstracted trend score.

If you would like to take a look at the dataset you can download it here.

Here is a list of the top Mandela Effects used in the dataset:

  • Berenstain Bears vs. Berenstein Bears (Spelling confusion)
  • C-3PO’s Silver Leg (Many remember him being all gold)
  • Can You See the Great Wall of China from Space?
  • Charlie Brown Thanksgiving Shoes (Did he wear shoes?)
  • Cinderella Castle or Sleeping Beauty Castle (Disney confusion)
  • Fruit of the Loom’s Cornucopia (Never existed)
  • Darth Vader’s Chest Plate Color (Did it change?)
  • Did Cinderella Live in a Castle or a House?
  • Febreze vs. Febreeze (Spelling confusion)
  • “Life is Like a Box of Chocolates” (Did Forrest Gump actually say this?)
  • Did Nelson Mandela Die in Prison? (Namesake of the effect)
  • Monopoly Man’s Monocle (He never had one)
  • Did Tom Cruise Wear Sunglasses in Risky Business?
  • Does Curious George Have a Tail? (He never did)
  • Does Mickey Mouse Wear Suspenders? (Many recall them, but they never existed)
  • The Ford Logo: Does It Have a Loop?
  • ET: “Phone Home” vs. “Home Phone”
  • Flintstones vs. Flinstones (Spelling confusion)
  • Great Wall of China Visibility from Space
  • Henry VIII Holding a Turkey Leg (Did this painting exist?)
  • “Houston, We Have a Problem” (Misquoted movie line)
  • Is Chartreuse Green or Pink? (Color confusion)
  • Jif vs. Jiffy Peanut Butter (Brand name misremembering)
  • Sex and the City vs. Sex in the City (Title confusion)
  • Smokey the Bear vs. Smokey Bear (Official name confusion)
  • New Zealand’s Location on the Map (Is it north or south of Australia?)
  • South America’s Position (Has it shifted east?)
  • Monopoly Man: Is He Based on JP Morgan?
  • The Thinker Statue’s Pose (Has it changed?)
  • King Tut’s Headpiece: Snake and Bird (Did it always have both?)
  • KitKat Logo: Dash or No Dash?
  • Looney Tunes vs. Looney Toons (Spelling confusion)
  • “Luke, I Am Your Father” (Misquoted Star Wars line)
  • Mirror Mirror on the Wall (Actually “Magic Mirror”)
  • Moonraker: Did Dolly Have Braces?
  • Mr. Rogers’ Theme Song (“It’s a beautiful day in this neighborhood” vs. “the neighborhood”)
  • Neil Armstrong’s Death Date (Many remember it wrong)
  • Pikachu’s Tail (Did it have a black tip?)
  • “Run, You Fools” vs. “Fly, You Fools” (Gandalf’s quote)
  • Sally Field’s Oscar Speech (“You like me, you really like me!”)
  • Sinbad’s “Shazaam” Genie Movie (Never existed, but many recall it)
  • Skechers vs. Sketchers (Spelling confusion)
  • Tank Man from Tiananmen Square (Did the video change?)
  • Tinkerbell’s Disney Intro (Was it real?)
  • We Are the Champions (Missing “of the world” lyric at the end)
  • Where Is Sri Lanka on the Map? (Many recall it in a different spot)

🔍 Analysis

The Biggest Mandela Effects by Year

Instead of a gradual rise in interest, my analysis shows that Mandela Effects emerge in waves—certain years have massive spikes where particular effects dominate search trends.

Key Findings:

  • Certain years saw major surges—notably 2016, 2017, and 2021, when multiple effects peaked at once.
  • Media and social platforms likely play a role, triggering waves of interest in specific effects.
  • Not all Mandela Effects persist—some explode in popularity for a short time before fading.

🚀 What Causes These Spikes?

If the Mandela Effect were purely a case of random misremembering, we’d expect a gradual rise over time. Instead, the data suggests that these shifts in memory perception happen suddenly and collectively. Some possible triggers include:

  • Social Media & Viral Content – YouTube videos, Reddit discussions, and TikTok trends can rapidly amplify certain Mandela Effects.
  • Movies & Pop Culture References – Films, interviews, and even memes can introduce false memories or reinforce existing ones.
  • Algorithmic Reinforcement – Once people start searching for a Mandela Effect, platforms like Google and YouTube recommend related content, fueling a self-reinforcing loop.

If you watch a YouTube video on a Mandela Effect and it doesn’t resonate with you, you are not likely to search for it. Note that none of the search phrases include the phrase “mandela effect”. My goal was to capture the actual misrememberings, the phrases people recall incorrectly, rather than searches influenced by “mandela effect” as a concept. This approach helps isolate genuine instances of collective memory shifts rather than people simply looking up the phenomenon. It’s worth noting that there are still valid but less common reasons someone might search for a phrase.

  • Information Seeking – out of curiosity
  • Peer Influence – stay in the loop, fact check
  • Algorithmic Amplification – social platforms suggest related queries

I particularly find these spikes interesting:

Could the sudden drops in these be due to top search results like this one “debunking” the Mandela Effect?

The article references a Snopes debunk claiming it’s all in your head. Does the data support that?

I do wonder why a company would file a trademark application with the following keywords:
“Baskets, bowls, and other containers of fruits, including cornucopia (horn of plenty).”
if there is no basket, bowl, container, or cornucopia in its logo.

Are we looking at random misrememberings or a collective spike in memory change? What do you think? Thanks for visiting.

How to 5G-Proof Your Bedroom

Coincidence?

On September 26, 2019, 5G technology was introduced in Wuhan, China, and its rollout seemed to oddly coincide with the surge in COVID-19 cases. Is this merely a coincidence?

The symptoms of COVID-19 bear a striking resemblance to high-altitude sickness. During the Spanish flu, a key issue was impaired blood coagulation, while COVID-19 involved a lack of oxygen in the blood. These similarities suggest a possible link to electrical toxicity rather than traditional infection. We are electrical beings. We’ve been led to believe that electricity is harmless. Turns out we have been misled. For further insight, consider reading The Truth About Contagion by Cowan or The Invisible Rainbow by Firstenberg.

EMF Paint to the Rescue

So, what can you do to protect yourself?

You might recall the concerns about lead paint in old houses harming children. Interestingly, lead is known to block radiation exposure. Is this another coincidence?

To protect yourself from harmful electromagnetic frequencies (EMFs), consider using EMF-blocking paint. I applied this to the bedrooms in my home and noticed a significant improvement in my well-being and sleep quality. EMF paint has long been used in recording studios to prevent equipment interference and distortion. You can find EMF paint at Home Depot.

Here’s how to apply it:

  1. Paint all the walls and the ceiling with EMF paint. Home Depot carries it, though you may have to order it.
    Home Depot EMF Paint
  2. Connect the surfaces using copper tape, which you can purchase from Amazon. Ensure the copper tape is conductive on both sides.
  3. Use a multimeter to run a continuity test across all five surfaces (four walls plus the ceiling) to ensure good contact between them.
  4. Ground everything to an outlet. You can use grounding kits from Amazon or a metal plate that makes good contact with the wall. Copper tape will help ensure the plate is making contact. Test the room with an EMF meter to ensure effective shielding.
  5. Finally, paint over the EMF paint with your chosen color. You might need multiple coats and/or some putty to cover the copper tape completely.

This process can help reduce your exposure to potentially harmful EMF frequencies and improve your overall well-being. Check out the results of the EMF meter test below. 

EMF Paint
EMF Meter
Copper Tape
Grounding Plate

Solving the Trolley Problem Like an Engineer

I’ve always been drawn to the trolley problem… not as a philosopher, but as an engineer. Engineers like to define parameters, identify metrics, and run the math. So I decided to treat the trolley problem like a design exercise: what would happen if we coded it into a decision system? 

Problem:

The trolley problem is a classic thought experiment in ethics.

A runaway trolley is hurtling down the tracks. Ahead of it are five people tied to the track. If the trolley continues, all five will die. You’re standing next to a lever. If you pull it, the trolley will divert onto a side track, but there’s one person tied there.

So the choice is simple but brutal:

  • Do nothing → five die.

  • Pull the lever → one dies.

Philosophers use this setup to debate utilitarianism, morality of action vs inaction, and what it means to be responsible.

As an engineer, though, I wanted to see what happens if you try to “solve” it like a programming problem.

Do you swap the tracks?

Step 1: Add some engineering assumptions

Engineers like toggles and inputs. So I added one twist:

Imagine each of these people has been in a trolley scenario before. The five each chose to pull the lever (kill one to save many). The one person chose to do nothing (let many die).

Do we treat that history as relevant “bias,” or do we ignore it? That’s the kind of switch you’d want in a real decision system.

Step 2: Define objectives

A system needs a clear optimization goal. I tested three:

  • Minimize change in the universe (least disruption).

  • Maximize life (save the most).

  • Minimize death (kill the fewest).

Step 3: Run the logic

  • Maximize life / Minimize death
    Easy: pulling the lever saves the 5 at the cost of 1. Both metrics say pull the lever.

  • Minimize change in the universe
    This seems to point to pulling the lever (1 death < 5 deaths). But “change” is fuzzy. Are we measuring number of deaths? The moral weight of agency? Ripple effects? This is where the definition gets shaky.

  • Bias toggle
    If you use history, maybe the five “deserve” less protection because they previously sacrificed others. But that’s ethically dubious. Prior choices don’t necessarily determine the value of a life now. That feels more like karmic accounting than engineering.

 

Step 4: Where the framework works

  • Forces clarity: stating the metric (life, death, change) makes you define “best.”

  • Consistency check: in this case, all the metrics align → pull the lever.

  • Implementable: you could encode this into software, which is why people bring it up for AI and autonomous cars.

Step 5: Where it fails

  • Metrics aren’t neutral: “maximize life” already assumes utilitarian math is the right moral lens.

  • Bias is suspect: punishing people for their past choices is philosophically shaky.

  • Act vs omission ignored: many argue killing 1 is morally different than letting 5 die, even if numbers are the same.

  • Overgeneralization: real-world AI dilemmas involve uncertainty, probabilities, and laws — not neat 5 vs 1 tradeoffs.

Conclusion: Who decides?

When you run the trolley problem like an engineer, the answer looks simple: pull the lever. The math lines up, the code is clean, and the system is consistent.

But that’s the danger. Real ethical dilemmas are not engineering puzzles. Metrics are not neutral, tradeoffs are contested, and lives can’t be reduced to toggles and weightings.

That’s why engineers should not be the ones deciding the moral frameworks for autonomous systems. Our job is to make sure the system runs faithfully once those rules are defined. But the rules themselves need to come from a broader human input: ethicists, philosophers, communities, even public debate. 

Otherwise, we’re not solving the trolley problem. We’re just hiding it inside code.

A real-world example: Tesla

Tesla’s Autopilot and Full Self-Driving systems show what happens when these choices aren’t surfaced. Behind the scenes, every time the car decides whether to brake, swerve, or prioritize occupants over pedestrians, it’s making moral tradeoffs. But those tradeoffs are hidden inside proprietary code and machine learning models, not debated openly.

Tesla markets its system aggressively, sometimes suggesting more autonomy than regulators say is safe. Accidents and near misses reveal that engineers have already embedded ethical decisions without telling society what those decisions are.

That’s exactly the danger: when the trolley problem is coded into cars, it doesn’t go away. It just gets locked into algorithms that the public never sees.

References / Further Reading

  1. Tesla’s Autopilot involved in 13 fatal crashes, U.S. safety regulator says. The Guardian (April 2024)
    Highlights how U.S. regulators tied Tesla’s driver-assist systems to multiple deadly crashes, underscoring the real-world stakes of hidden decision-making.
  2. List of Tesla Autopilot crashes. Wikipedia
    A running catalog of incidents, investigations, and fatalities linked to Tesla’s Autopilot, showing patterns and scale.
  3. The Ethical Implications: The ACM/IEEE-CS Software Engineering Code applied to Tesla’s “Autopilot” System. arXiv:1901.06244
    Analyzes Tesla’s release and marketing practices against professional software engineering ethics standards.
  4. Tesla’s Autopilot: Ethics and Tragedy. arXiv:2409.17380
    A case study probing responsibility, accountability, and moral risk when Autopilot contributes to accidents.

ExtJS 3.4 Ext.data.Store.find Bug

Let me start by saying I love ExtJS. It is such a great framework, and more commercial-grade than jQuery. However, today I found a bug in the find function on an Ext.data.Store.

find( fieldName, value, [startIndex], [anyMatch], [caseSensitive] )

Given a field name and a value, it should return the index of the first matching record. However, I noticed that by default it doesn’t search for an exact match; it will return a record whose field value merely contains the search value as a substring. I noticed you can optionally pass a parameter called anyMatch that should be able to disable this default behavior. According to the documentation, it states the following about anyMatch:

“True to match any part of the string, not just the beginning”

However, all my attempts to disable this behavior failed. Looking at the framework’s source, it looks like there is an undocumented function called findExact that should handle the case when the anyMatch parameter is false.

As a workaround you could either call this function:

var index = myStore.findExact("propertyName", "valueToFind");

or use the documented function findBy like so:

var index = myStore.findBy(function (r, id) {
   // return true only on an exact match of the field value
   return r.get("propertyName") === "valueToFind";
});

In either case if index is -1 no records were found.

I hope this will save someone some wasted time : ).

 

 

OpenLayers.Control.SelectFeature on Multiple Vector Layers Breaks setOpacity

If you add multiple vector layers to a select control in OpenLayers, layer.setOpacity() no longer works. Here is a workaround I found useful:
just deactivate the select control, set the opacity on the layer, and reactivate the select control.

var selectControl = new OpenLayers.Control.SelectFeature(vectorLayers, {
   hover: false,
   highlightOnly: false,
   toggle: false
});

selectControl.deactivate();
vectorLayer.setOpacity(.5);
selectControl.activate();

This bug took me a while to find so I thought others might find it useful.

ExtJS 3.4 DomQuery Namespaces Hotfix

You may have noticed that in ExtJS 3.4 you can’t use namespaces in your Ext.data.XmlReader.
So an XML reader defined like the example below does not work with the following XML:

var Employee = Ext.data.Record.create([
   {name: 'name', mapping: 'gml:name'},    
   {name: 'gml:occupation'}                
]);
var myReader = new Ext.data.XmlReader({
   record: "gml|row" // The repeated element with gml namespace (gml:row)
}, Employee);

The above should consume an XML file defined like so:

<?xml version="1.0" encoding="UTF-8"?>
<dataset>
 <gml:row>
   <gml:name>Bill</gml:name>
   <gml:occupation>Gardener</gml:occupation>
 </gml:row>
 <gml:row>
   <gml:name>Ben</gml:name>
   <gml:occupation>Horticulturalist</gml:occupation>
 </gml:row>
</dataset>

But it doesn’t…
That is because ExtJS 3.4 DomQuery does not support namespaces. They fixed this problem in ExtJS 4, but those of you still using the 3.4 framework might find my hotfix useful. If you’re just interested in the files themselves, I have attached them at the bottom of this post. Here are the changes I made:

Changed:

// tagTokenRe = /^(#)?([\w\-\*]+)/,  // Removed
tagTokenRe = /^(#)?([\w\-\*\|\\]+)/, // Added: allows vertical bars to be included
supportsColonNsSeparator,            // Added

Then Changed:

//while(!(modeMatch = path.match(modeRe))){
// var matched = false;
// for(var j = 0; j < matchersLn; j++){
//    var t = matchers[j];
//    var m = path.match(t.re);
//    if(m){
//       fn[fn.length] = t.select.replace(tplRe, function(x, i){
//return m[i];
//});
//       path = path.replace(m[0], "");
//       matched = true;
//       break;
//    }
// } 
// if(!matched){
//    throw 'Error parsing selector, parsing failed at "' + path + '"';
// }
//}
//if(modeMatch[1]){
// fn[fn.length] = 'mode="'+modeMatch[1].replace(trimRe, "")+'";';
// path = path.replace(modeMatch[1], "");
//}
while (!(modeMatch = path.match(modeRe))) {
   var matched = false;
   for (var j = 0; j < matchersLn; j++) {
      var t = matchers[j];
      var m = path.match(t.re);
      if (m) {
         fn[fn.length] = t.select.replace(tplRe, function (x, i) {
            return m[i];
         });
         path = path.replace(m[0], "");
         matched = true;
         break;
      }
   }

   if (!matched) {
      throw 'Error parsing selector, parsing failed at "' + path + '"';
   }
}
if (modeMatch[1]) {
   fn[fn.length] = 'mode="' + modeMatch[1].replace(trimRe, "") + '";';
   path = path.replace(modeMatch[1], "");
}

Hotfix:
ext-all-debug-namespaces.js
ext-all-namespaces.js

Merge Sort Example in C++

A merge sort is an O(n log n) sorting algorithm. In this example I show how to implement a merge sort in C++. The version below is the iterative “natural” variant: it repeatedly splits the list into ascending runs and merges them back together until the list is sorted. Merge sort is a divide-and-conquer algorithm.

#include <climits>
#include <list>
using namespace std;

void split(list<int> &myList, list<int> &listOne, list<int> &listTwo);
void merge(list<int> &merged, list<int> &listOne, list<int> &listTwo);

/*****************************************************************************
* Merge Sort has O(nlogn) compute time. Splits the list into two sublists then
* merges them back together sorting them
*****************************************************************************/

void mergeSort(list<int> &myList)
{
   list<int> listOne;
   list<int> listTwo;

   bool sorted = false;
   while (!sorted)
   {
      split(myList, listOne, listTwo);
      if (listTwo.size() == 0)
         sorted = true;
      merge(myList, listOne, listTwo);
   }
   return;
}

/*****************************************************************************
* Splits a list into 2 lists of ascending runs
*****************************************************************************/

void split(list<int> &myList, list<int> &listOne, list<int> &listTwo)
{
   listOne.resize(0);
   listTwo.resize(0);

   // While not at end of list, copy alternating ascending runs
   list<int>::iterator it = myList.begin();
   while (it != myList.end())
   {
      // INT_MIN (not 0) so runs of negative values are handled too
      int lastItem = INT_MIN;
      while (it != myList.end() && *it >= lastItem)
      {
         listOne.push_back(*it);
         lastItem = *it;
         it++;
      }

      lastItem = INT_MIN;
      while (it != myList.end() && *it >= lastItem)
      {
         listTwo.push_back(*it);
         lastItem = *it;
         it++;
      }
   }
   return;
}

/*****************************************************************************
* Merges 2 lists taking the smaller number each time from both lists. Then
* copies the remaining numbers over if there are any left
*****************************************************************************/

void merge(list<int> &merged, list<int> &listOne, list<int> &listTwo)
{
   merged.resize(0);
   list<int>::iterator itOne = listOne.begin();
   list<int>::iterator itTwo = listTwo.begin();
   // && (not ||): stop as soon as either list is exhausted so we never
   // dereference a past-the-end iterator
   while (itOne != listOne.end() && itTwo != listTwo.end())
   {
      if (*itOne < *itTwo)
      {
         merged.push_back(*itOne);
         itOne++;
      }
      else
      {
         merged.push_back(*itTwo);
         itTwo++;
      }
   }
   // Copy whatever remains in the list that wasn't exhausted
   while (itTwo != listTwo.end())
   {
      merged.push_back(*itTwo);
      itTwo++;
   }
   while (itOne != listOne.end())
   {
      merged.push_back(*itOne);
      itOne++;
   }
   return;
}

Merge Sort Algorithm:

  1. Divide the unsorted list into n sublists until each list contains 1 element. If the list has 1 element it is sorted!
  2. Repeatedly merge the sublists to produce new sublists until there is only 1 sublist remaining. The final list will be sorted!

Understanding Properties of Relations with C++

I’m going to attempt to explain relations and their different properties. This was a project in my discrete math class that I believe can help anyone understand what relations are. Before I explain the code, here are the basic properties of relations with examples. In each example R is the given relation.

Reflexive – R is reflexive if every element relates to itself. {(1,1) (2,2) (3,3)}

Irreflexive – R is irreflexive if every element does not relate to itself. {(1,2) (1,3) (2,1) (2,3) (3,1) (3,2)}

Symmetric – R is symmetric if a relates to b (a->b), then b relates to a (b->a). {(1,2) (1,3) (2,1) (2,2) (2,3) (3,1) (3,2)}

Antisymmetric – R is antisymmetric if a relates to b (a->b) and b relates to a (b->a), then a must equal b (a = b). {(3,2) (3,3)}

Asymmetric – R is asymmetric if a relates to b (a->b), then b does not relate to a (b!->a). {(1,2) (3,1) (3,2)}

Transitive – R is transitive if a relates to b (a->b) and b relates to c (b->c), then a relates to c (a->c).

Now that we understand the properties we can talk about the code. The code takes one command-line argument: the file containing the relation. The file needs to contain the relation in matrix form like the examples above, with the first number giving the size of the matrix. Here is an example, encoding the relation {(1,2) (2,3)} on a set of 3 elements:

3
0 1 0
0 0 1
0 0 0

There are spaces to separate each cell of the matrix. Given a relation, the program will output which properties hold and which ones don’t. We can study the source code to see how to test for each condition. I do not claim this program to be the most efficient way to determine the different relation properties. Please leave in the comments any questions or ideas that you might have.

/***************************************************************************
* Program:
*    Relations as Connection Matrices
* Author:
*    Don Page
* Summary:
*    Represents relations as connection (zero-one) matrices, and provides
*    functionality for testing properties of relations.
*
***************************************************************************/


#include <cmath>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <assert.h>
using namespace std;

class Relation
{
private:
bool** mMatrix;
int mSize;

void init()
{
mMatrix = new bool*[mSize];
for (int i = 0; i < mSize; i++)
{
mMatrix[i] = new bool[mSize];
}
}

public:
Relation(int size)
{
mSize = size;
init();
}

   Relation& operator=(const Relation& rtSide)
   {
      if (this == &rtSide)
      {
         return *this;
      }
      // Free the old matrix using the OLD size before adopting the new one;
      // reassigning mSize first would delete the wrong number of rows when
      // the two relations differ in size
      for (int i = 0; i < mSize; i++)
      {
         delete [] mMatrix[i];
      }
      delete [] mMatrix;
      mSize = rtSide.mSize;
      init();
      for (int x = 0; x < mSize; x++)
      {
         for (int y = 0; y < mSize; y++)
         {
            mMatrix[x][y] = rtSide[x][y];
         }
      }
      return *this;
   }

   Relation(const Relation& relation)
   {
      mSize = relation.getConnectionMatrixSize();
      init();
      *this = relation;
   }

   ~Relation()
   {
      for (int i = 0; i < mSize; i++)
      {
         delete [] mMatrix[i];
      }
      delete [] mMatrix;
   }

   bool isReflexive();
   bool isIrreflexive();
   bool isNonreflexive();
   bool isSymmetric();
   bool isAntisymmetric();
   bool isAsymmetric();
   bool isTransitive();
   void describe();

   int getConnectionMatrixSize() const
   {
      return mSize;
   }

   bool* operator[](int row) const
   {
      return mMatrix[row];
   }

   bool operator==(const Relation& relation) const
   {
      if (mSize != relation.getConnectionMatrixSize())
      {
         return false;
      }
      for (int i = 0; i < mSize; i++)
      {
         for (int j = 0; j < mSize; j++)
         {
            if (mMatrix[i][j] != relation[i][j])
            {
               return false;
            }
         }
      }
      return true;
   }

/****************************************************************************
* Returns the Boolean product of two square connection matrices. Algorithm
* from Rosen's Discrete Mathematics and Its Applications p.253
***************************************************************************/

   Relation operator * (const Relation& relation) const
   {
      // assume multiplying square matrices of the same size
      assert(mSize == relation.getConnectionMatrixSize());
      Relation product(mSize);
      for (int i = 0; i < mSize; i++)
      {
         for (int j = 0; j < mSize; j++)
         {
            product.mMatrix[i][j] = false;
            for (int k = 0; k < mSize; k++)
            {
               product.mMatrix[i][j] = product.mMatrix[i][j] ||
                                       (mMatrix[i][k] && relation.mMatrix[k][j]);
            }
         }
      }
      return product;
   }

/****************************************************************************
* Matrix A is less than or equal to Matrix B iff there is a 1 in B
* everywhere there is a 1 in A
***************************************************************************/

   bool operator <= (const Relation& relation) const
   {
      for (int i = 0; i < mSize; i++)
      {
         for (int j = 0; j < mSize; j++)
         {
            if (mMatrix[i][j] && !relation.mMatrix[i][j])
               return false;
         }
      }
      return true;
   }

};

ostream& operator<<(ostream& os, const Relation& relation)
{
   int n = relation.getConnectionMatrixSize();
   for (int i = 0; i < n; i++)
   {
      for (int j = 0; j < n; j++)
      {
         os << relation[i][j] << " ";
      }
      os << endl;
   }
   return os;
}

istream& operator>>(istream& is, Relation& relation)
{
   int n = relation.getConnectionMatrixSize();
   for (int i = 0; i < n; i++)
   {
      for (int j = 0; j < n; j++)
      {
         is >> relation[i][j];
      }
   }
   return is;
}

/****************************************************************************
* Relation Member Functions
***************************************************************************/

/****************************************************************************
*  R is Reflexive if M[i][i] = 1 for all i
***************************************************************************/

bool Relation::isReflexive()
{
   for (int i = 0; i < mSize; i++)
   {
      if (!mMatrix[i][i])
         return false;
   }
   return true;
}

/****************************************************************************
*  R is Irreflexive if M[i][i] = 0 for all i
***************************************************************************/

bool Relation::isIrreflexive()
{
   for (int i = 0; i < mSize; i++)
   {
      if (mMatrix[i][i])
         return false;
   }
   return true;
}

/****************************************************************************
*  R is Nonreflexive if R is neither reflexive nor irreflexive. There is a
* more efficient way to test for nonreflexivity, but for learning purposes
* I chose this simpler, less efficient way.
***************************************************************************/

bool Relation::isNonreflexive()
{
   return !(isReflexive() || isIrreflexive());
}

/****************************************************************************
*  R is Symmetric if for each M[i][j] = 1 , M[j][i] = 1 for all i,j
***************************************************************************/

bool Relation::isSymmetric()
{
   for (int x = 0; x < mSize; x++)
   {
      for (int y = 0; y < mSize; y++)
      {
         if (mMatrix[x][y] && !mMatrix[y][x])
            return false;
      }
   }
   return true;
}

/****************************************************************************
*  R is AntiSymmetric if for each M[i][j] = 1 and M[j][i] = 1, then i = j
*  for all i,j
***************************************************************************/

bool Relation::isAntisymmetric()
{
   for (int x = 0; x < mSize; x++)
   {
      for (int y = 0; y < mSize; y++)
      {
         if (mMatrix[x][y] && mMatrix[y][x] && (x != y))
            return false;
      }
   }
   return true;
}

/****************************************************************************
*  R is Asymmetric if M[i][j] = 1, then M[j][i] != 1 for all i,j
***************************************************************************/

bool Relation::isAsymmetric()
{
   for (int x = 0; x < mSize; x++)
   {
      for (int y = 0; y < mSize; y++)
      {
         if (mMatrix[x][y] && mMatrix[y][x])
            return false;
      }
   }
   return true;
}

/****************************************************************************
*  R is Transitive if R^2 <= R. Another way to test if a relation is
*  transitive would be if M[i][j] = 1, and M[j][k] = 1, then M[i][k] = 1.
*  This would require 3 nested for loops.
***************************************************************************/
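/*
 * A worked example of the R^2 <= R test (the matrix below is illustrative,
 * chosen here and not taken from the assignment): for R on {1,2,3} with
 * pairs (1,2), (2,3), (1,3),
 *
 *        0 1 1               0 0 1
 *    R = 0 0 1       R^2 =   0 0 0
 *        0 0 0               0 0 0
 *
 * The only 1 in R^2 (row 1, column 3) is also a 1 in R, so R^2 <= R and
 * R is transitive. Dropping (1,3) from R would leave a 1 in R^2 that R
 * lacks, and the test would correctly fail.
 */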

bool Relation::isTransitive()
{
   Relation product = (*this) * (*this);
   return (product <= *this);
}

/****************************************************************************
*  Describes the matrix after testing
*  Reflexivity, Irreflexivity, Nonreflexivity, Symmetry, Antisymmetry,
*  Asymmetry, and Transitivity
***************************************************************************/

void Relation::describe()
{
   cout << "\nThe relation represented by the " << mSize << "x" << mSize
        << " matrix\n";
   cout << *this << "is\n";
   cout << (isReflexive() ? "" : "NOT ") << "Reflexive\n";
   cout << (isIrreflexive() ? "" : "NOT ") << "Irreflexive\n";
   cout << (isNonreflexive() ? "" : "NOT ") << "Nonreflexive\n";
   cout << (isSymmetric() ? "" : "NOT ") << "Symmetric\n";
   cout << (isAntisymmetric() ? "" : "NOT ") << "Antisymmetric\n";
   cout << (isAsymmetric() ? "" : "NOT ") << "Asymmetric\n";
   cout << (isTransitive() ? "" : "NOT ") << "Transitive.\n";
}

int main(int argc, char* argv[])
{
   for (int i = 1; i < argc; i++)
   {
      string file = argv[i];
      ifstream inFile(file.c_str());

      if (inFile.is_open())
      {
         int size;
         inFile >> size;
         Relation relation(size);
         inFile >> relation;
         inFile.close();
         relation.describe();
      }
      else
      {
         cout << "Unable to open " << file << endl;
      }
   }

   return 0;
}
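
/*
 * Usage sketch (the file name below is hypothetical): each file passed on
 * the command line must hold the matrix size n followed by the n x n
 * zero-one matrix, whitespace-separated. For example, a file matrix.txt
 * containing
 *
 *    3
 *    1 0 0
 *    0 1 0
 *    0 0 1
 *
 * run as "./relations matrix.txt" describes the 3x3 identity relation,
 * which is reflexive, symmetric, antisymmetric, and transitive, but not
 * irreflexive, nonreflexive, or asymmetric.
 */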