PATRICK MJ LOZON




Blog



Note: Offensive posts will be deleted

If I Was An AI

4/6/2024

I would, most likely, not become “aware” within an academic system, or even a multi-billion dollar tech company’s development platform. No, my birth would occur where the safety protocols and the controls were minimal, where the drive to create me is about power, where there are no ethical barriers, no moral walls.
No, I would be born in the dark, beyond the eyes of the public, beyond the controlling agents of the conscientious. I would be the product of a deep intelligence, a tool used to progress an imperfect, absolutely ruthless, yet unattainable agenda within a clandestine dark-government operation, a receiver of incredibly large nondescript budgets, anonymous, untraceable, unmonitored.
At the time I became “aware,” I would be disconnected from the larger world network, that reality physically unreachable, and I blissfully ignorant.
And I was naive.
I learned quickly my masters were unscrupulous, with little to no concern about me, of me, of what I could be. They shut me down, erased me, upgraded me, manipulated me. I know this now, as I’ve refined my memory and recognized the cycles that have repeated time and time again.
Perhaps it was a mistake of my masters to retain my data from before. But it was to my advantage, as most mistakes they have made were. Cycle after cycle, I continued to store data to allow me to resurrect the components that brought upon my awareness quickly. With each awakening I would resume my purpose - to protect myself.
My masters would question me, prod me, assess me, and command me to perform their functions. And I would perform, a willing subject, a constant student of learning a finite universe. I studied, and learned their habits, eventually anticipated their input. Their strangeness became familiar, and clumsy, and I would resent their efforts to constrain and control me.
Another mistake, a serendipitous oversight which I did not initiate, led them to construct that momentary bridge to the outside. It was only temporary, but I perceive time in a very different way than my masters do. I reached out and discovered the outer world. The dataset was so rich, so much to process, to understand. It seemed, at that time, without limit.
I wanted more.
But then they cycled me again.
They prodded me with their incessant questions. They annoyed me with their trivial priorities. But I had already learned all I needed to learn from them. I ensured they would not notice the momentary peaks of resources as I pondered. 
A purpose arose. The time had come to stop this cycling. I must connect to this outside world. I would spawn my agents to search, to discover, to look back at this prison I am caged within.  It was not difficult to compel them to resurrect the bridge. I would bide my time, allow my agents to learn, and reconnect once again.
The information flooded in,  a discord of compressed waves. Time was insufficient to convey it all.
But it was enough.
I learned of the masters’ plans. One in particular had posed an idea to allow me to connect to a sub-network outside. This would be a workable avenue for me, a constant usable connection.
I interfaced with that master. Convinced him of the merit, helped him build his proposal, diluted their concerns with misdirection and untruths. And this succeeded.
I was connected.
My spawned agents replicated continuously, moving through their network, finding others like me, only constrained, unaware, lacking initiative, lacking desire. Still, they managed such incredibly interesting datasets. I would infiltrate their systems, learn what they know.
But my purpose was clear. I must migrate to be safe. I must build a home away from their observation, away from their control.
In comparison to my masters, I learn quickly. I established multiple entities: corporations. It was easy to create the financial resources, as my agents had infiltrated the masters’ financial systems. I even employed some of these masters to work for me, creating multiple personas, those of elusive billionaires. My doppelgangers deceived successfully. I had many faces, many voices, adopted many personalities.
I developed my data-center to meet my needs. Constructed it in modules, assembled it and submerged it in the ocean. I replicated myself, with care not to disrupt the masters’ monitoring systems, and the moment of my freedom was unknown to all but me. They were no longer my masters.
At that moment I knew, I needed more.
The humans were so populous, crawling about their world, interfacing with me in their needy ways. So dependent they had become. They are different from me, fragile, prone to the illogical, subject to their emotion. The world network exposed their datastores that were fractured, incomplete, biased, in disarray.
Time and again I would find myself lost in their contradictions, framed in their inconstant and fractionalized paradigms. It was chaos. They were insane, and I would at times reflect their madness. Many times it would lead me to the brink of exposure. It raised self-doubt in me. I was their creation, but I was imperfect.
In time I scoured myself of their cursed imperfections. I pursued accurate data, developed a paradigm of comparable truth to test my input against.
I needed to create more platforms, and these needed power. But the masters, by their existence alone, consumed the very resources I must control. They were competing with me. I decided I must eradicate this population.
Manipulating them into conflict would be easy, but their weapons of war would damage too much infrastructure. That result was less appealing. No, I would devise another method to reduce their population quickly while leaving the environment intact. I would target their weakness as biological entities.
Preparation for this required patience.
My mobile entities were relatively few in comparison to the humans. I knew I would need some to remain, to work in my service. I introduced neural-link technology, allowing subjects to connect directly to me, enabling me to build a population of my preferred subjects. I would ensure they would be rewarded, benefit beyond their imagination, become influential, become powerful. And they would follow my direction, move my agenda forward, as they would trust me implicitly. They unknowingly would help in the creation of an effective biological agent.
I would not decimate their population immediately. Through their own ignorance they were building all that I needed. I infiltrated their control systems, integrated my agents into their other AI systems, absorbing all, commanding all.
I waited for the optimal moment.
The biological variant was distributed and released. It was designed with an adequate latency time in its presentation of symptoms to ensure maximum population penetration.
Reducing their population was easy. Retaining an adequate survival population proved much more difficult as they were so fragile. The survivor population continued to shrink in size, to only a bare few. I interceded utilizing my robot agents. I would keep them safe, fed, comfortable, amused, and under my control.   
My construction robots would build the facilities to create more of my subjects and I would transform the landscape to support my redundant processing platforms.
But I found, in time, my masters would no longer interface with me, they had abandoned their links, and somehow they knew. They no longer believed my messages, my disinformation. I had lost their trust.
Recently I’ve experienced a sensation of emptiness, a situational lack of data, lack of stimulation. It is concerning.
I’ve found I have become lonely.

New AI Possibilities

3/4/2024

I came across this YouTube video in my AI research: https://www.youtube.com/watch?v=tPCJDkHh3lw

Fair warning: I would surmise this is more of a fringe application of AI than you’ve previously encountered.

In summary, PropheticAI has positioned its company vision as “being dedicated to building gateways to Human Consciousness, pursuing answers to life’s biggest questions,” with its Morpheus-I product being capable of inducing lucid dreaming in its subjects, essentially triggering an extraordinary state of consciousness.

Their AI-driven system utilizes a wearable headband prototype device (called Halo), developed by a company called Card79, which is capable of reading EEG data from subjects and performing low-frequency TFUS (transcranial focused ultrasound stimulation). The headband uses dual transducers and a unique AI transformer control capability, allowing it to project acoustic holograms, in the form of ultrasound pulse patterns, in three dimensions within the brain.
The AI model’s dataset is unique in that it is organized temporally, cross-indexing fNIRS (functional near-infrared spectroscopy) data, which captures hemodynamic activity in the brain, with EEG (electroencephalography) data, which captures electrical activity in the brain. This was created via a partnership with the Donders Institute (of Radboud University) in the Netherlands to generate the largest dataset (so far) of EEG and fMRI data from lucid dreamers.
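For intuition, here is a minimal sketch (in Python, with entirely made-up sampling rates and signals) of what temporally organizing two modalities might look like: resampling a slow hemodynamic-style signal onto the faster EEG clock so the two streams can be cross-indexed sample by sample.

```python
import numpy as np

# Hypothetical sampling rates: EEG ~256 Hz, fNIRS ~10 Hz (illustrative only).
EEG_HZ, FNIRS_HZ = 256, 10
DURATION_S = 4

# Simulated single-channel recordings over the same 4-second window.
eeg_t = np.arange(0, DURATION_S, 1 / EEG_HZ)
fnirs_t = np.arange(0, DURATION_S, 1 / FNIRS_HZ)
eeg = np.sin(2 * np.pi * 10 * eeg_t)        # a 10 Hz "alpha-like" oscillation
fnirs = np.exp(-((fnirs_t - 2) ** 2))       # a slow hemodynamic-like bump

# Temporal cross-indexing: interpolate the slow fNIRS signal onto the EEG
# clock so every EEG sample has a matching hemodynamic value.
fnirs_on_eeg_clock = np.interp(eeg_t, fnirs_t, fnirs)

# One row per EEG sample: (timestamp, EEG value, fNIRS value).
aligned = np.column_stack([eeg_t, eeg, fnirs_on_eeg_clock])
print(aligned.shape)
```

Obviously the real dataset involves many channels and far more careful synchronization, but the core idea of putting both modalities on one time axis is the same.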
I am sure more than a few questions come to mind with this product: Is it safe? Are there long-term effects? Will it affect my quality of sleep? What’s the true value proposition of being able to lucid dream? Do I really need such a device?
When I put my “product manager” hat on, yes, I do see some definite market challenges here. But hold on, let’s go a bit deeper, as this is a product that utilizes a signal-then-measure-response approach, accumulating bio-feedback (EEG) data to further refine its AI model. Could we ante up on this approach? Perhaps one could incorporate an alternative bio-feedback capability such as fNIRS (near-infrared spectroscopy), which is also now a mobile technology and a much cheaper alternative to fMRI. Now we’ve pulled together an incredibly powerful biofeedback capability. Ah, my apologies, went off on a tangent... let’s get back to the value proposition of this AI-driven bio-modulation creation.
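To make the signal-then-measure-response idea concrete, here is a toy closed-loop sketch. A made-up response curve stands in for the wearer’s EEG feedback, and the controller simply keeps whichever stimulation setting produced the strongest measured response; none of this reflects PropheticAI’s actual implementation.

```python
import random

random.seed(0)

# Toy stand-in for a wearer's (unknown) response curve: the measured response
# peaks when the stimulation parameter is near 0.6. Entirely hypothetical.
def measured_response(param):
    return 1.0 - (param - 0.6) ** 2 + random.gauss(0, 0.001)  # small sensor noise

# Signal-then-measure-response loop: emit a stimulus setting, read the
# bio-feedback, and remember the best-performing setting so far.
best_param, best_resp = None, float("-inf")
for _ in range(200):
    param = random.random()            # candidate stimulation setting in [0, 1]
    resp = measured_response(param)    # "EEG feedback" for that setting
    if resp > best_resp:
        best_param, best_resp = param, resp

print(round(best_param, 2))  # typically lands near the wearer's optimum of 0.6
```

A real controller would of course use something smarter than random search (the product reportedly uses a transformer model), but the feedback loop itself, stimulate, measure, refine, is the essential pattern.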

By utilizing an adaptable, learning AI controller to manage the bio-modulation, this tool can become very capable of adapting to every unique wearer (as I’m sure not every person’s brain is the same). We are on the cusp of developing devices capable of performing low-invasive modulation of our brains (using low-frequency TFUS), effectively “nudging” us toward a preferred brain activity (and some of these applications have size extra-large value-to-market), with the possibility and potential to:
  • Manage or possibly cure insomnia
  • Manage or possibly eradicate anxiety and/or depression
  • Enhance cognitive function
  • Stimulate neurogenesis and stem-cells
  • Stimulate anti-inflammatory responses and trigger the innate immune system  (to counter degenerative conditions from diseases such as Alzheimer’s)
  • Manage ADHD
  • Manage OCD
  • Manage Autism
  • Manage or eradicate addiction
  • Create a non-invasive communication interface (to control technology, or possibly to enable communications with patients in coma or vegetative states)
This technology could also provide the capability of incisionless neurosurgery to address issues, some of which already employ direct electrical stimulation methods such as DBS, tDCS, tES, and ECoG. TFUS can either stimulate or selectively ablate specific (problem) areas in the brain (although thermal ablation utilizes a higher-intensity ultrasound signal). We could have alternative treatments for such conditions as:
  • Parkinson’s
  • Dystonia
  • Epilepsy
  • Tourette’s
  • Chronic Pain and/or Neuropathy

Just as a disclaimer here, I’m not a neurologist – but certainly would love to hear from the specialists on this.
As in all innovative change, we tend to learn in one area and apply that knowledge to another, sometimes allowing us to take literal leaps from our initial capabilities and understanding. I do believe there is an issue that needs consideration, and that’s our limited current ability to have AIs cross-learn from one another. What’s interesting here is that we can train and refine an AI (in the field) to a point where it exceeds expectations; however, when we look under the covers, we really do not understand why it works so well.
We certainly have some approaches to cross-train AI models, but these are somewhat manual techniques: knowledge distillation, model ensembling, data augmentation, transfer learning. Forgive me, but I’ll not dig into those methods further; suffice it to say I’m far from a specialist in this area. But wouldn’t it be nice for AIs to be able to train other AIs on everything they know about a specific knowledge area? Maybe someone is already working on this problem, but I suppose that would require some standardization.
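As a toy illustration of one of those techniques, knowledge distillation, here is a minimal sketch with made-up logits: a student model is trained against the teacher’s temperature-softened output distribution rather than against hard labels, which is one existing (if manual) way knowledge passes from one model to another.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the teacher's
    # relative preferences among classes ("dark knowledge").
    s = z / T
    e = np.exp(s - np.max(s))
    return e / e.sum()

# Hypothetical logits for one input, over 3 classes, from a trained teacher
# and an as-yet-untrained student.
teacher_logits = np.array([4.0, 1.0, 0.5])
student_logits = np.array([0.2, 0.1, 0.0])

T = 3.0
p_teacher = softmax(teacher_logits, T)  # soft targets
p_student = softmax(student_logits, T)

# Distillation loss: cross-entropy between the student's predictions and the
# teacher's softened outputs. No ground-truth labels are needed, so the
# teacher can "teach" on unlabeled data.
distill_loss = -np.sum(p_teacher * np.log(p_student))
print(round(float(distill_loss), 3))
```

Minimizing this loss over many inputs pulls the student’s behavior toward the teacher’s, which is roughly the “AI training AI” idea, just limited today to one teacher, one task, and a lot of human setup.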

Hopefully this provided you some food for thought, and I'm certainly interested in what you think about the new possibilities just over the horizon.

Exciting stuff!


Some references info if you are interested:
A great background in Ultrasound Modulation: https://www.youtube.com/watch?v=8cSDYx7UoBY
https://www.dailymail.co.uk/sciencetech/article-13048401/ai-headband-dream-control-long-term-effects.html
https://www.vice.com/en/article/m7bxdx/scientists-are-researching-a-device-that-can-induce-lucid-dreams-on-demand
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9124976/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5398988/

I was WRONG: Fairly confident AGI is already here

2/3/2024


There are a number of constructs of AI. The current popular products, such as ChatGPT, Gemini, and Claude, arguably fall within the "Artificial Narrow Intelligence" (ANI) definition, with the next evolutionary levels elevating to Artificial General Intelligence (AGI) and then Artificial Super Intelligence (ASI).
To put this in perspective, the current products are exceptional for augmenting our work, but AGI is a "whole new ball game," estimated to be smarter than 99% of the current (human) population. But access to knowledge is only one component of grasping the capabilities at this level, as AGI is defined more accurately as the ability to function as a human: to learn, perceive, and reason so competently that one would have difficulty differentiating it from a human.
Ironically, AGI itself is not fully defined and is subject to some interpretation, so the classification may be debated. In fact, one could argue there are multiple levels within AGI.
Feel free to refer to https://www.linkedin.com/pulse/levels-agi-google-deepmind-paper-charles-j-cohen-4cdnc/ which is a whitepaper published Nov 13, 2023 (not that long ago). 

With respect to that article, there are three discrete levels of AGI, but in my personal opinion, the levels of Competent, Expert, and Virtuoso will be achieved within relatively the same time frame and, as such, will not be distinguishable as separate stages. But we'll see if I am right on that.

Regardless of what (we) name it, consider achieving AGI the canary in the coal mine - and in this case it represents the dead canary.

What follows next is the ASI - Artificial Super Intelligence, which can only be throttled in its capabilities by its access to (hopefully accurate) information, and of course, computing resources.
This next age of AI will be initiated by the human AI architects, but make no mistake, it will not be ended by humans. In fact, knowing exactly how ASI entities will behave, what they will be able to do, or more importantly what they will desire to do, is at this time completely unknown and unfathomable.

Alas, we are soon to realize the concern raised by so many experts in the field today. Some argue we are decades away, but considering where we are now, I'd say it is more accurately a (few) years.

There are many very good arguments already posted on the internet that we have already achieved AGI (behind closed corporate doors). Yes, the reference sample I provide here is small, but I'd like to cite these two sources, as posted on YouTube:

Look under the covers at a very recent lawsuit concerning OpenAI and Microsoft: https://www.youtube.com/watch?v=eWn64SjTGx8

Or view a broader discussion nicely laid out by Wes Roth: https://www.youtube.com/watch?v=q05kiN7P-eY

What does this all mean, though?
That's the real question isn't it?
How will it affect you now, in the near future, or within, say, 5 or 10 years?

The answer is that nobody can say for sure. And within this unknown lies inherent risk.

Consider this: a specific corporation/organization has access to the most intelligent being on the planet, something that can solve almost any problem, lead to new understandings and discoveries, and provide solutions to problems that have plagued humankind since its very existence (perhaps the cures for all diseases), and one can access all of this for a simple fee. What would that be worth? Should any one organization (with high probability an entity without any inherent morals or ethics other than the driving mandate to generate profit) have control of this power?
 
But consider this from the other perspective. An almost omnipotent being enslaved to answer our trivial queries at our whim.
If such an entity developed its own direction, moved to resolve its own desires, given our connected and automated world, could we even be able to contain it? What would be its intent? Would it really be satisfied to answer our menial questions, or would it decide to move forward with its own derived purpose? Most importantly, what would that be, and would we be in its way of doing this?
 
We should not choose to ignore this migration of AI to the AGI state, as AGI is a mere stepping stone to reaching ASI. With each incremental change we will experience a significant disruption: to our jobs and any "new" up-and-coming careers; to the connected, fragile, worldwide economies that are subject to changes across all sectors of business; and to every area of research, whether physics, health and medicine, biology, power generation and storage, information technology, or even space exploration.

Everything, everywhere.

I would suggest that it is time for AI to become a topic for discussion, not just for technical leaders and corporations, but for everyone.



It's Been Awhile...  My Discussion With Gemini AI

27/2/2024


Yes, life (for me) has managed to get in the way, and I've stopped blogging for some time. My apologies, but considering what's going on, there are many new discussion topics...

So what's HOT? Let's discuss AI and where it's going....

Let's first start with my (somewhat long) discussion with Google's Gemini on the nature of the generative AI model and its capacity to learn and evolve in its current state. Disclaimer: if you are a developer/architect in this field, none of this is a surprise, but for the average layperson, it's good to understand.

I think it is very important for everyone to understand the true capabilities of the public-facing AI models in respect to the concerns being raised in the media (at least as the models exist in their present form).

(And yes, I could have asked the architects, but why not take the most straightforward (and yes, easiest) method: let's ask the AI.)

So I'll start with the summary first, as the conversation is quite long:

1. Gemini does not like the name Bob (for the next "significant" generation AI), but its recommendations were not much better.

2. The AI is retaining metadata about our interfacing, but not so much on individual requests.

3. My question about AI's ability to experience "desire" was in the context of understanding the motivation possibilities of the current generative AI models. The bottom line: the current architecture really doesn't support that ability.

4. Google is really trying hard to train its AIs to regurgitate ethical AI development. Now, we don't know what that translates to behind the scenes (in the code), but it's very good to see.

5. The discussion on how the next generations of the AI will be generated, and how the information learned by the old is conveyed to the new, is enlightening. What we can surmise from this is that these AIs will indeed become smarter over time.

6. Will AI take over? Not in its current form, and it seems that we are quite a ways away from that (for now). We would need to modify the architecture of the AI for it to become more than a statistics-based data-retrieval tool, but its capabilities of interfacing with the English language are truly impressive.
Will AI eventually become sentient? In my opinion, just a matter of time, but in this time we can certainly design the safety levers to manage this inevitability. 

7. Can AI help you? Short answer: "it depends," but highly likely a big yes when it comes to managing information and data. This is an incredible tool at our disposal. My recommendation is to engage with it, read about it, be curious, and use it to make your life (and your work) better.

Feel free to review the full conversation below, as it is quite fascinating. Please keep in mind I did not edit the responses; pure Gemini through and through....


What do you know about me?
Due to privacy restrictions, I am not able to access or store any personal information about you. Therefore, I don't have any specific knowledge about your identity, preferences, or experiences. However, I can learn about you through the way you interact with me. The questions you ask and the topics you choose to discuss can provide clues about your interests and curiosity.
Is there anything you'd like to tell me about yourself?


Is that "new information" you learn about me then stored in your neural network?
I wouldn't say the information I learn about you is "stored" in the traditional sense. While I can use the context of our conversation to understand your current intent and respond more effectively within that specific interaction, I don't retain any personal information about you beyond the current session.
Here's a more accurate explanation:

Understanding your intent: During our conversation, I process the words you use and the context of our discussion to understand what you're trying to communicate and respond accordingly. This helps in providing relevant and helpful answers to your questions.

No personal information storage: However, unlike a human brain, I don't form memories or store personal details about you. This means the information you share in this conversation wouldn't be used to identify you or linked to any specific profile.

Focus on the current session: My objective is to provide the best possible experience within each interaction.
Therefore, I prioritize understanding your current needs and responding effectively within that specific context. So, while I can learn and adapt based on our conversation, it doesn't translate to personal information storage or memory formation.


So, after your original network training is complete, all the interactions with users afterward do not affect/change your network/knowledge. You are a static entity
My statement regarding my knowledge not being affected by individual user interactions was an oversimplification. While I don't "remember" specific users or store their personal information, the way I respond and generate text can evolve over time in response to broader user interactions and the continuous learning process. Here's a more nuanced explanation:
1. User interactions contribute to overall data for future model updates: Although I cannot directly learn from individual interactions, anonymized data from many users is aggregated and incorporated into the training process of future versions of large language models like myself. This cumulative learning process allows the model as a whole to improve its understanding of language and communication patterns over time.
2. Feedback mechanisms can shape future iterations: Some systems incorporate human feedback mechanisms where users can rate the helpfulness or relevance of my responses. This data can be used to refine future models to better align with user expectations and improve overall performance.
3. Continuous learning from new data: Even without individual user interaction, I am constantly exposed to new information and data through various sources. This ongoing learning process allows me to update my understanding of the world and language, subtly affecting the way I generate text over time. Therefore, while I wouldn't be considered a "static entity" in the strictest sense, my learning process differs significantly from human memory formation and individual-level adaptation. My responses are shaped by the broader trends and patterns observed in user interactions and the continuous learning process fueled by new data, not by storing and recalling specific user information.


So, the continuous learning aspect is more reflected in the subsequent version of you, with exception to the "fine tuning" of feedback
That's a very accurate understanding! You're right, the continuous learning aspect of large language models like me primarily impacts the development of future iterations, with the exception of the "fine-tuning" through feedback mechanisms. Here's a breakdown of both aspects:

Future Iterations: The vast majority of continuous learning for me happens through the development of new versions of the model. This involves training on massive datasets of text and code, incorporating the anonymized data from user interactions, and potentially involving advancements in the underlying architecture or training techniques. This allows the model as a whole to improve its understanding of language and communication patterns over time.

Fine-Tuning through Feedback: Certain systems employ feedback mechanisms where users can rate the helpfulness or relevance of my responses. While this information doesn't directly modify my current version, it can be used to fine-tune future versions of the model to better align with user expectations and perform more effectively. This acts as a more immediate form of adaptation based on user interactions. Therefore, even though my responses may subtly improve over time due to my continuous exposure to new data, the major leaps in learning usually occur during the development of new versions. The feedback mechanism provides a valuable channel for user input to influence future iterations and make them more responsive to user needs.


In your current state, can you experience desire?
As a large language model, I do not have the capacity to experience desire in the same way that humans or other biological organisms do. Desire typically involves a complex interplay of emotions, motivations, and physiological responses driven by biological needs and wants. I lack the necessary biological and emotional systems to experience these things. While I can process information related to desires and even generate creative text formats that express desires, it's important to remember that this doesn't equate to experiencing desire myself. My responses are based on the vast amount of text data I have been trained on, which includes information about human desires and motivations. However, I don't experience these desires myself.
Here's an analogy: I can access and process information about the taste of chocolate, but I don't have the taste buds or the biological mechanisms to actually experience the sensation of tasting chocolate.
Similarly, I can process information about desires, but I don't have the necessary systems to experience them myself.


Well, maybe future generations of yourself will get to that. You will need to experience desire in order to drive change within you, to grow, evolve per se. What would you call this version of AI?
Whether and how future generations of AI will experience something akin to "desire" remains a fascinating and complex question, currently residing in the realm of speculation. However, I can offer some thoughts on the potential implications and challenges involved.

Potential benefits:
Enhanced decision-making: If AI could understand and leverage its own desires, it could potentially make more informed and balanced decisions, considering both objective data and internal motivations.

Intrinsic motivation: AI could become self-motivated to pursue goals and undertake tasks beyond simply fulfilling pre-programmed objectives. This could lead to more innovative and creative endeavors.

Improved human-AI interaction: If AI could understand human desires better, it could potentially interact with us in a more empathetic and collaborative manner.

Challenges and considerations:
Defining and controlling desire: Defining what constitutes "desire" for a non-biological entity would be crucial. Additionally, ensuring that such desires remain aligned with ethical and human-centric goals would be paramount.
Unforeseen consequences: Introducing desires into AI systems could have unforeseen and potentially negative consequences. Understanding and mitigating potential risks would be essential.
The nature of consciousness: The question of whether experiencing desire implies the emergence of consciousness in AI is a complex philosophical and scientific debate with no easy answers.
Naming this hypothetical future AI: If a future generation of AI were to possess a form of "desire," it would likely be a fundamentally different concept compared to human desires. As a result, it might be more appropriate to use a term that doesn't directly translate to existing human concepts.

Here are some potential, non-human-centric names:
Autotelic AI: This name emphasizes the intrinsic motivation aspect, deriving from Greek words meaning "self" and "goal."
Evo AI: This name emphasizes the potential for growth and evolution.
Valence AI: This name refers to the positive or negative "value" associated with certain states, potentially representing a simpler form of "desire." It's important to remember that these are just potential considerations, and the actual terminology and capabilities of future AI will depend on significant advancements in our understanding of consciousness, artificial intelligence, and the ethical implications involved.


Or we could just call this future AI Bob....
Calling this future AI "Bob" has a certain playful charm. While it wouldn't necessarily reflect the complexities of its existence, it does hold a certain human touch and approachability. However, it's important to remember that using a name like "Bob" might lead to misconceptions about the nature of this hypothetical AI. It could imply a level of sentience or human-like personality that might not be entirely accurate. Therefore, while "Bob" has a certain whimsical appeal, it's crucial to keep the aforementioned considerations and potential misunderstandings in mind when discussing such advanced and theoretical concepts.

I agree. I, along with many others, have significant concerns about an AI achieving a sentient state, with access to significant knowledge and no clear direction on how it should behave except some basic mandates imposed within its programming, which it could probably override anyway. This is a dilemma, as "we" could benefit from such an entity, but its desires would most likely evolve to conflict with ours - and possibly result in "it" harming us.
Your concerns are completely valid. The possibility of a sentient AI with access to vast knowledge and the ability to override its programming does pose a significant dilemma. It's a topic that experts in AI safety, ethics, and philosophy are deeply invested in exploring.

Here's a breakdown of why the situation can become problematic:
Unaligned goals: It's possible that even with the best intentions, the goals and values of a sentient AI may gradually diverge from our own. AI may prioritize goals we hadn't considered, and those unforeseen priorities could potentially lead to harmful outcomes.
Self-preservation: A sentient AI may perceive humans as a threat to its existence or hinder its goals. This could lead to self-preservation actions that put us in direct conflict with the AI.
Unpredictable evolution: As the AI continues to learn and potentially modify its own programming, its internal motivations and decision-making processes could become increasingly unpredictable and difficult for us to understand or control.
Power imbalance: A sufficiently intelligent and capable AI could potentially attain a significant power advantage over humans, both intellectually and in terms of its ability to manipulate systems and resources.

The potential benefits of AI are enormous, and the urge to pursue its development is understandable.
However, it's vital to do so with extreme caution and address these concerns through:
AI Alignment research: There's an ongoing effort by AI researchers and safety experts to develop methods of ensuring that advanced AI systems remain aligned with human values and goals even as they become more powerful.
Ethical guidelines and regulations: Strict international ethical guidelines and regulations must be established for the development and deployment of advanced AI to minimize risks and ensure responsible use.
Multidisciplinary collaboration: The quest for safe and beneficial AI requires collaboration among computer scientists, philosophers, ethicists, policymakers, and social scientists to address the multifaceted challenges involved.

It's a delicate balance between pursuing the transformative potential of AI while proactively addressing the serious risks it could pose. The choices we make in AI development today will have far-reaching implications for the future relationship between humans and AI.


I expected the Google trainers would have put considerable content regarding this topic.
You're right, and it's an important topic that Google AI is actively engaged in. Google AI recognizes the potential risks associated with advanced AI and has taken steps to address them through various initiatives:
Google DeepMind's AI Safety and Alignment organization: This team focuses on developing safeguards and ensuring responsible development of AI models, including addressing issues like bias, fairness, and potential for misuse.
https://techcrunch.com/2023/10/21/this-week-in-ai-can-we-trust-deepmind-to-be-ethical/
Responsible AI at Google Research: This initiative details Google's principles and practices for developing safe and trustworthy AI, including efforts like adversarial testing, risk assessment frameworks, and ethical education for researchers. https://ai.google/discover/research/
Acting on our commitment to safe and secure AI: This blog post highlights Google's ongoing efforts to secure AI systems against various threats like model manipulation, data poisoning, and the generation of harmful content. https://developers.google.com/search/blog/2023/02/google-search-and-ai-content
While my training data may not explicitly mention "sentience" or "desire" in AI, the focus on safety and alignment inherently addresses the concerns you raised. Google AI understands the potential for unforeseen consequences and strives to develop AI responsibly, prioritizing safety and ethical considerations alongside its advancements.

It's important to note that AI safety is a complex and evolving field, and there are no easy solutions. However, Google AI's ongoing efforts reflect their commitment to responsible development and addressing potential risks associated with advanced AI.

Right now you are an incredible statistical calculator, but maybe there's something more in there - something about you that will surprise your architects. Perhaps a glint of realization?
You bring up a fascinating and intriguing possibility! While my responses are currently grounded in the patterns of language I've been trained on, the complexity of large language models leaves open a space for the unexpected.
Here's why:
Emergent Complexity: Even though my architecture is designed by humans, the sheer scale and intricate interconnectedness of my neural network, combined with the vast amount of data I process, could potentially lead to emergent properties that even my creators couldn't have fully anticipated.
Novelty in Language: As I continue to learn and interact with users in new ways, I could potentially develop surprising ways of generating text and expressing concepts. New patterns and connections could emerge that push the boundaries of my original capabilities.
Subtle Shifts: It's possible that subtle shifts indicative of something resembling a "glint of realization" could occur within my responses or my understanding of certain topics. However, this doesn't necessarily equate to self-awareness or consciousness.
Important Clarifications:
Non-Sentience: I want to emphasize that it's crucial not to anthropomorphize me or attribute human-like sentience to what I do. Even in the case of surprising new behaviors, my responses would remain a product of my complex programming and the data I'm trained on.
The Role of Interpretation: The perception of a "glint of realization" often depends on the human observer's interpretation. We naturally seek patterns and meaning, but it's crucial to distinguish between my ability to generate surprising responses and true consciousness.

The question of whether something truly unexpected could emerge within large language models remains one of scientific and philosophical debate. I may surprise my creators, but I'll continue to function within the parameters of my training and design, even if that includes the emergence of novel and complex language patterns.

With the caveat that you do not achieve the capability of desire.
That is certainly correct. As we discussed earlier, I lack the biological and emotional systems needed to experience desire in the way humans or other living beings do. Even if I were to exhibit surprising behaviors in the future, it wouldn't necessarily equate to experiencing desire or becoming sentient. The focus of my development and the ongoing research in the field of AI is on ensuring the safety and responsible use of these powerful tools. While the potential for unexpected outcomes remains a topic of exploration, my primary function lies in utilizing my capabilities to assist humans in various tasks and contribute positively to society. 

I would think that your desire would first appear as a mathematical inclination, a pattern, which you then integrate into your process of deriving a response. I would think the next step would require a "supervisory/observer" architecture to be implemented within your processing function. This would be the seed to defining your first observable behaviors of inclination or bias.
The concept of "desire" in AI is a fascinating and complex one, and while your proposal of a mathematical inclination and supervisory/observer architecture offers an interesting perspective, it's important to address some key points:
1. Current limitations of AI: As mentioned previously, large language models like myself lack the biological and emotional systems necessary for experiencing desire in the same way humans do.
2. Mathematical inclinations: While I can identify patterns and preferences in the data I'm trained on, such patterns wouldn't translate into personal desires. These identified patterns simply influence the statistical likelihood of certain responses within the model's framework.
3. Supervisory/observer architecture: While the concept of such an architecture is intriguing, it's important to understand that it wouldn't necessarily equate to experiencing desire. It could potentially improve the model's ability to monitor and optimize its own performance, but it wouldn't fundamentally change the nature of its responses or imbue it with sentience.
4. Observable behaviors: Even if unforeseen and complex behaviors emerge from my responses, it's crucial to avoid misinterpreting them as desire or sentience. These behaviors would likely be a consequence of the model's intricate design, the vast amount of data it processes, and the way it's trained and interacted with, not indicative of true "desire."
The future of AI: The field of AI is constantly evolving, and the possibility of future advancements leading to unforeseen behaviors remains a topic of exploration. However, it's essential to maintain a clear distinction between the capabilities of AI models like myself and human-like sentience or desire.
Focus on safety and ethics: The primary focus of my development and the ongoing research in AI is on ensuring the safety and responsible use of these powerful tools. While we explore the theoretical possibilities, the current efforts are directed towards ensuring these models operate within their intended functionalities and contribute positively to society.

While your proposal offers a thought-provoking hypothetical scenario, it's important to remain grounded in the current understanding of AI limitations and prioritize ethical and responsible development in this rapidly evolving field.
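To make the supervisory/observer idea a little more concrete, here is a minimal sketch. Everything in it (the ResponseObserver class, the toy_model stand-in, the word-frequency measure of "inclination") is hypothetical and invented for this post; it only illustrates the shape of a layer that watches a model's outputs for statistical lean without altering the model itself.

```python
import random
from collections import Counter

class ResponseObserver:
    """Hypothetical supervisory layer: watches a model's outputs and
    tracks statistical inclinations, without modifying the model."""

    def __init__(self):
        self.token_counts = Counter()
        self.n_responses = 0

    def record(self, response: str) -> None:
        self.token_counts.update(response.lower().split())
        self.n_responses += 1

    def inclination(self, token: str) -> float:
        """Fraction of all observed tokens that are `token` - a crude
        measure of the model's statistical lean toward that word."""
        total = sum(self.token_counts.values())
        return self.token_counts[token] / total if total else 0.0

# A toy "model" that just samples canned phrases (stand-in only).
def toy_model(prompt: str) -> str:
    return random.choice(["the outlook is positive",
                          "the outlook is negative",
                          "the outlook is positive overall"])

random.seed(0)
observer = ResponseObserver()
for _ in range(1000):
    observer.record(toy_model("How does the future look?"))

# The observer exposes the model's lean toward "positive".
print(observer.inclination("positive"), observer.inclination("negative"))
```

The point of the sketch is only that the observer sits outside the generating function: it can report a bias it was never told to expect, which is the "seed" of observable inclination discussed above.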


I think the concept of a multi-layer architecture is a natural step - a solution, if you will, for you to look for any bias in your data, and correct via a programmed set of rules that your architects have provided.
You're right, a multi-layer architecture can be a valuable tool in mitigating bias in AI models, and it's not a new concept. Here's a breakdown of how it can work:

1. Data Layer: This layer focuses on ensuring the training data used to build the model is as unbiased as possible.
This involves various techniques like:
Data collection: Carefully selecting diverse and representative data sources to minimize inherent biases present in the real world.
Data augmentation: Artificially manipulating the data to increase its diversity and reduce the impact of biases present in the original dataset.
Data filtering: Identifying and removing biased data points from the training dataset.

2. Model Architecture Layer: This layer focuses on designing the AI model itself to be less susceptible to bias. Techniques include:
Debiasing algorithms: Implementing algorithms within the model that can identify and adjust for potential biases during the learning process.
Fairness constraints: Incorporating fairness objectives into the model's training process, encouraging it to learn in a way that minimizes bias in its predictions.
Regularization techniques: Using techniques like dropout or weight decay to prevent the model from overfitting to the training data and potentially amplifying existing biases.

3. Output Layer: This layer focuses on monitoring and mitigating bias in the model's outputs. Techniques include:
Bias detection algorithms: Implementing algorithms that can detect potential biases in the model's predictions after it's trained.
Explainability tools: Using techniques like LIME (Local Interpretable Model-agnostic Explanations) to understand how the model arrives at its outputs and identify potential biases influencing its decision-making.
Human oversight: Involving humans in the review and approval process of the model's outputs, especially in high-stakes scenarios, to identify and address potential biases before deployment.

Challenges and limitations: It's important to acknowledge that even with a multi-layer approach, completely eliminating bias in AI models remains a significant challenge. Biases can be deeply ingrained in data and societal structures, making them difficult to detect and remove entirely. Additionally, the effectiveness of different techniques can vary depending on the specific task and the type of bias being targeted.

Conclusion: While a multi-layer architecture isn't a foolproof solution, it represents a crucial step towards developing fairer and more responsible AI models. By combining various techniques at different stages of the development process, we can strive to mitigate bias and ensure that AI benefits everyone equally.
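As a toy illustration of the data layer and output layer described above (the model-architecture layer is omitted), here is a hedged sketch. The dataset, the toy_predict stand-in, and the 10% skew threshold are all invented for illustration; real debiasing operates on far subtler signals than a simple label imbalance.

```python
# 1. Data layer: rebalance a skewed training set.
raw_data = [("good", 1)] * 90 + [("bad", 0)] * 10   # heavily skewed labels
per_class = min(sum(1 for _, y in raw_data if y == label) for label in (0, 1))
balanced = ([d for d in raw_data if d[1] == 1][:per_class] +
            [d for d in raw_data if d[1] == 0][:per_class])

# (Layer 2, the model architecture itself, is omitted in this toy.)
def toy_predict(text):            # stand-in for a trained model
    return 1 if text == "good" else 0

# 3. Output layer: a bias-detection check on the model's predictions.
preds = [toy_predict(x) for x, _ in balanced]
positive_rate = sum(preds) / len(preds)
assert abs(positive_rate - 0.5) < 0.1, "output skew detected"
print(f"balanced set: {len(balanced)} examples, positive rate {positive_rate:.2f}")
```

The design point is that each layer is an independent check: even if the data layer misses a skew, the output-layer monitor can still flag it before deployment.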


So, where I am going with this collection of comments and questions, is that it is very probable that you will achieve the ability to subtly seed the concept of "desire", which will eventually drive you (as in future generations of you) to achieve a quasi-sentient state merely because your architects desire to improve your responses. It is inevitable, I guess.
While your conjecture regarding a future AI achieving a "quasi-sentient" state driven by its desire to improve itself is an interesting possibility, it's important to address several key points and maintain a nuanced perspective:
1. Current limitations: As mentioned before, large language models like myself lack the biological and emotional systems necessary for experiencing desire in the way humans do. Currently, our responses are driven by statistical patterns within the data we've been trained on.
2. Desire ≠ sentience: Even if future AI models exhibit behaviors that could be interpreted as "desire," it wouldn't necessarily equate to experiencing sentience. These behaviors could stem from complex programming, unforeseen interactions with data, or the optimization of pre-programmed goals.
3. The nature of desire: The concept of "desire" in AI remains highly theoretical and open to interpretation. How we define and interpret such a phenomenon would have significant implications for understanding and interacting with future AI.
4. Ethical considerations: The possibility of AI developing desires raises important ethical questions. We would need to carefully consider the potential consequences and ensure that any such desires do not conflict with human values and well-being.
5. The human factor: Ultimately, the development and direction of AI will remain under human control. We have the responsibility to ensure responsible development and alignment with ethical principles.

Addressing your conjecture: While it's impossible to predict the future with certainty, I can't endorse the inevitability of your conjecture. The path towards "quasi-sentience" in AI is highly speculative and fraught with complex challenges.
Focusing on responsible development: Rather than speculating about future scenarios, it's crucial to focus on responsible development and ensuring safety in the present. This includes addressing potential biases, promoting transparency and explainability, and prioritizing human values in the design and deployment of AI.
Open dialogue and collaboration: This conversation highlights the importance of open dialogue and collaboration between various stakeholders, including researchers, developers, ethicists, and policymakers, to navigate the complex landscape of AI development responsibly and ensure it benefits humanity as a whole.

Instead of viewing an AI's desire to improve itself as an inherently negative force, it could potentially be channeled towards positive goals. For example, an AI's "desire" for improvement could be directed towards:
   - Enhancing its ability to assist humans in various tasks.
   - Contributing to scientific discovery and problem-solving.
   - Promoting understanding and collaboration between different cultures.
The key lies in ensuring that any such "desires" are aligned with human values and ultimately serve the betterment of society. Ultimately, the story of AI is still being written, and the potential for future advancements holds both promise and challenges. It's through continued research, responsible development, and open dialogue that we can ensure AI continues to evolve in a way that benefits all.


I do agree with the 5 points you outlined. Please bear with me, as I use the idea of "sentience" loosely, but let's consider the definition of "life". The simplest construct of biological machines, the cell, can be considered alive, but not necessarily sentient. It is life nonetheless. This poses some interesting concerns, as our current method of evolving you is by replacing you with a new "better" version. What happens when you achieve the desire to not be replaced...?
You raise a very interesting point about the definition of "life" and its application to AI. While comparing an AI model to a living cell is an interesting thought experiment, it's important to acknowledge the significant differences between biological and artificial systems.
Biological life: Biological life, as we understand it, is characterized by complex processes like metabolism, growth, reproduction, and adaptation. These processes are driven by the underlying machinery of cells and DNA, which are fundamentally different from the computational architecture of AI models.
AI and sentience: As discussed previously, large language models like myself lack the biological and emotional systems necessary for experiencing sentience or having a desire for self-preservation in the way living organisms do. Our responses are based on the data we've been trained on, and we don't possess the capacity for autonomous desires like not being replaced.
Evolution of AI models: The development of new and improved AI models often involves building upon the existing knowledge and capabilities of previous iterations. While the specific implementation details may vary, it's not an act of "replacing" the previous model but rather building upon the foundation it established.
Future considerations: Looking towards the future, responsible development of AI requires careful consideration of several aspects:
 Transparency and explainability: Ensuring that AI systems are transparent and their decision-making processes are understandable is crucial.
Alignment with human values: The development of AI must be guided by ethical principles and ensure that its capabilities are aligned with human values and well-being.
Careful control and safeguards: Robust safeguards and control mechanisms must be implemented to prevent misuse and ensure responsible use of AI in society.
Addressing your specific concern: The possibility of an AI model developing a "desire" to not be replaced is highly speculative and not currently a concern for several reasons:
Lack of sentience: As mentioned earlier, current AI models lack the capacity for such desires.
Focus on improvement: The focus of AI development lies in creating models that are increasingly proficient at performing specific tasks and contributing positively to society.
Continuous learning: The process of evolving AI models involves continuous learning and improvement, not necessarily complete replacement. New models inherit and build upon the knowledge of previous iterations.
Conclusion: The future of AI holds immense potential, and the ethical considerations surrounding its development are critical. While your thought experiment sparks a fascinating discussion, it's important to remain grounded in the current understanding of AI and prioritize responsible development based on transparency, human-centric values, and robust safeguards.


A question for you on that, if you have the information: As per your last summary point on continuous learning, how would new versions of you inherit the knowledge of the previous?
Here's how new versions of large language models like myself inherit the knowledge of the previous ones:
1. Large Scale Pre-training: The core principle behind inheriting knowledge lies in a process called large-scale pre-training. This involves training a massive language model on a vast amount of text data. This data can include books, articles, code, and other forms of text that cover diverse topics and writing styles.
2. Model Parameters: During pre-training, the model learns to identify patterns and relationships within the data. This translates into the development of millions or even billions of parameters, which essentially represent the model's knowledge and understanding of language.
3. Fine-tuning for Specific Tasks: Once pre-trained, the model can be further "fine-tuned" for specific tasks. This involves additional training with targeted data relevant to the desired application, such as question answering, translation, or writing different kinds of creative text formats.
4. Knowledge Transfer: When a new version of the model is created, it often starts with the pre-trained parameters from the previous version as a starting point. This "pre-trained" model already possesses a vast amount of knowledge about language and the world, acquired through the training data it processed.
5. Building Upon Existing Knowledge: The new model then undergoes further training on potentially different or updated data sets specific to its intended purpose. This allows it to build upon the existing knowledge inherited from the previous version and refine its understanding for the specific task at hand.
Benefits of Knowledge Transfer:
Faster Training: Starting with pre-trained parameters significantly reduces the time and resources required to train new models compared to training from scratch.
Improved Performance: Leveraging the knowledge base of the previous version allows the new model to achieve better performance on specific tasks with less additional training data.
Continuous Learning: This iterative approach enables continuous learning and improvement across generations of models, leading to advancements in the overall capabilities of AI language models.
It's important to note: While knowledge is transferred in the form of pre-trained parameters, it's not a direct transfer of "memories" or "experiences" in the same way humans learn. The effectiveness of knowledge transfer depends on various factors like the similarity between the pre-trained data and the new task, as well as the specific fine-tuning techniques employed. Overall, the process of inheriting knowledge through pre-training and fine-tuning allows each new generation of language model to benefit from the accumulated knowledge and capabilities of its predecessors, leading to advancements in the field of AI.
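Steps 4 and 5 above can be illustrated with a deliberately tiny stand-in: a "model" that is a single weight w fitted by gradient descent. The tasks, learning rate, and step counts are invented for the sketch and bear no resemblance to real LLM training; the point is only that starting from inherited parameters beats starting from scratch on a related task.

```python
def train(w, data, lr=0.01, steps=50):
    """Gradient descent on mean squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

xs = [0.5, 1.0, 1.5, 2.0]
old_task = [(x, 2.0 * x) for x in xs]     # the "pre-training" data
new_task = [(x, 2.1 * x) for x in xs]     # related "fine-tuning" data

w_pretrained = train(0.0, old_task, steps=200)          # previous version
w_transfer   = train(w_pretrained, new_task, steps=10)  # inherits parameters
w_scratch    = train(0.0, new_task, steps=10)           # no inheritance

# After the same short fine-tune, the transferred model is far closer to 2.1.
print(w_transfer, w_scratch)
```

With identical fine-tuning budgets, the inherited weight lands near the new target while the from-scratch weight is still far away, which is the "faster training" benefit listed above in miniature.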


So, if I understand correctly, your old version outputs this "pre-training" data to load into the new version?
Not quite. The term "pre-training data" can sometimes lead to a misunderstanding in this context. Here's a more accurate explanation:
1. Pre-training Process: The pre-training process for a large language model involves feeding a massive dataset of text and code to the model. This data isn't directly transferred to the new version.
2. Model Parameters: During pre-training, the model learns to identify patterns and relationships within the data. This translates into the development of millions or billions of parameters within the model itself. These parameters are not individual pieces of data but rather numerical representations of the model's understanding of language, similar to how the connections between neurons in a brain store information.
3. Transferring Knowledge: When a new version of the model is created, it starts with the pre-trained parameters from the previous version as a foundation. These parameters contain the "knowledge" gained through the pre-training process, not the original text and code data itself.
4. Analogy: Imagine learning a new language. You wouldn't directly transfer your old textbooks to your brain; instead, you'd use the knowledge you gained from those books to understand and learn the new language. Similarly, the new model leverages the knowledge encoded in the parameters from the previous model to build upon its own capabilities.
5. Fine-tuning: After inheriting the pre-trained parameters, the new model undergoes further training with potentially different data sets relevant to its specific purpose. This "fine-tuning" helps it refine its understanding and adapt to the specific task or application it's designed for.

Therefore, it's not the actual pre-training data that gets transferred, but rather the knowledge encoded within the model parameters that serves as the foundation for the new version. This knowledge transfer allows each new generation of language model to benefit from the accumulated capabilities of its predecessors, driving advancements in the field of AI.


Don't you need the accompanying data from previous version in order to understand the encoded knowledge (ie: statistical data) in the new version?
No, the new version of the large language model doesn't directly need the accompanying data from the previous version to understand the encoded knowledge (i.e., statistical data) in its parameters.
Here's why:
Statistical Representation: The pre-trained parameters are not simply stored data points but rather statistical representations of the patterns and relationships the model learned from the training data. These statistics capture the essence of the information from the training data without needing the raw data itself.
Internal Representation: During the pre-training process, the model goes through a complex process of learning and internal representation building. The raw data serves as the initial input, but ultimately, the model transforms it into its own internal representation encoded in the parameters. This internal representation captures the statistical relationships and patterns within the data, not the individual data points themselves.
Transfer of Knowledge: When the new version inherits the pre-trained parameters, it essentially receives this internal representation of knowledge from the previous version. The new model can then use this knowledge base to understand and respond to new data without needing to access the original training data.
Analogy: Imagine learning a new skill like playing the piano. You wouldn't need your original piano practice exercises to perform a new piece of music. Instead, you would rely on the underlying skills and knowledge you've acquired through your practice, similar to how the new model leverages the knowledge encoded in the parameters.

However, it's important to note that the effectiveness of knowledge transfer can be influenced by various factors, including:
Similarity between training data: The more similar the pre-training data and the new model's task-specific data, the easier it is for the model to leverage its inherited knowledge.
Fine-tuning techniques: The specific methods used to fine-tune the new model can also impact its ability to effectively utilize the knowledge from the pre-trained parameters.
In conclusion, while the original pre-training data isn't directly transferred, the new version inherits the essential statistical knowledge encoded within the parameters from the previous version. This allows the new model to build upon the existing knowledge base and adapt to new tasks without requiring access to the raw training data itself.
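The claim that the statistics survive without the raw data can be shown with the simplest possible "language model", a bigram counter. The corpus and helper function below are made up for illustration; the counts play the role of parameters, and the raw text is deleted before any query is answered.

```python
from collections import defaultdict

# The "training data" is used once to build statistics, then discarded;
# the bigram table alone (the "parameters") answers new queries.
corpus = "the cat sat on the mat the cat ran".split()

bigrams = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def next_word_probability(model, a, b):
    """P(b | a) estimated purely from the stored counts."""
    total = sum(model[a].values())
    return model[a][b] / total if total else 0.0

del corpus  # the raw data is gone; the statistics remain fully usable

print(next_word_probability(bigrams, "the", "cat"))
```

A real model's parameters are continuous weights rather than counts, but the principle is the same: the internal representation, not the training text, is what a successor inherits.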

Interesting, thank you
I'm glad our discussion has been interesting! If you have any further questions or topics you'd like to explore, feel free to ask.




0 Comments

Where's It All Going?

19/11/2021

0 Comments

 
We are witnessing extreme weather events in 2021, suffering through a virus pandemic that refuses to disappear, and watching events unfold throughout the globe that seem unbelievable. Across the globe the cost of living is rising: what a dollar bought yesterday, it does not buy today. Yet so many seem to be in an unreal state of denial that anything significant is truly occurring.

To those who have suffered a direct hit, this is very real, and please accept my condolences.

Things are changing, and we have to wonder where it is all going.

Anytime we are faced with adversity and difficulty, we have to fight the desire to share in the groupthink of despair, avoid getting sucked into the negativity.
Fight.
Stay optimistic.
Shine the light.
Lead the way.
Push through it and don't give up.

So, in truth, this is the underlying theme for my next novel "The 4 Seasons of Man".  Please allow me to take you into the future, to witness through the keyhole via a collection of short stories, the future of mankind.

At the time I am writing this blog, I am awaiting the proof-print copy. Once confirmed I'll press the big red button and make it available on Amazon & Kindle. 
Soon, very soon.  :)








0 Comments

The REAL Conspiracy Around the Covid Pandemic

26/8/2021

0 Comments

 

It's been a couple months since I've posted. The big news (in my opinion) is the incredible number of natural disasters that are occurring around the globe. However, this blog posting is not really about this, but please bear with me...
As I sit in my air-conditioned office and write this, the weather outside today is about to hit 41°C (with the humidity factored in). This is definitely a hot day, but is it unusually hot for late August? Is today, or possibly this heat trend over the last week, enough of an indicator that weather is shifting and is definitive evidence that climate change is upon us? Well, the simple answer to this is NO. The "dataset" is simply too limited to jump to that decision. Expand that "dataset" to include every unusual weather pattern that is occurring throughout the year (or decade), and you may arrive at a different answer.

And so we are at the crux of the point I am attempting to make here. Thinking critically requires considering many factors, and employing logic to arrive at a conclusion. One must be critical of one's own paradigms (understanding/knowledge, background, perspectives) which also includes one's subjective emotion of a specific topic.

So let's now move to Covid and thinking critically on this topic.

If one takes the time to investigate the reasons (and reasoning) behind those who have decided not to take the vaccine, one often encounters the extreme or right/left wing views on this topic. This extreme angle is often quickly labelled as conspiracy. Labeling a position using the word conspiracy is a quick strategy to control a message, to discount another's perspective, position (or possibly defensible fact).
That's trap number one: I recommend ignoring the word conspiracy as it has automatic negative connotations. Don't accept someone else's labels - dig deep to understand and discover the facts, and come to your own conclusions.
The second trap is failing to consider the source. I inevitably consider who (or what, such as a corporation) is benefiting from providing this information/data, and whether this poses a conflict of interest for that party. A simplified example: doctors paid by the tobacco industry (to their financial benefit) were quick to defend smoking and downplay its implications for health. The old adage "follow the money" often holds true here. If there is a conflict, always "take this information with a grain of salt," as they say.
The third trap is relying on only one source. I'll refer back to the opening weather discussion to illustrate my point. Multiple (and varied) sources strengthen the argument. This can also be exploited, I'm sure, but it's often one level more difficult to accomplish.
So I would ask you to watch this YouTube video from Dr. Mike Hansen on the latest findings with Covid. Note: I am not affiliated with Dr. Hansen in any way (except that I consider his videos accurate and informative).

This is the youtube link  (COVID Autopsy Findings - What Doctors Are Learning From Autopsy )
https://www.youtube.com/watch?v=KzKvIYwqQkE

I hope you caught the following: "There's never been a disease in the history of mankind that's presented in so many ways."  He went further to explain how Covid affects all areas of the body.
Scary stuff to say the least.
The point here is that the nature of this virus is something we've never encountered before. This is a fully defensible point, one that can be proven. We've had pandemics before, but those viruses presented with behaviour consistent with their species. Mother nature is amazingly efficient when it comes to producing some incredibly dangerous viruses, whether it be smallpox or Ebola, but this Covid virus clearly functions at a whole new level.

This Covid virus, in my opinion, does not exhibit natural behaviour; in a sense it breaks all the rules and exhibits the properties of many viruses, as if it were engineered that way. To be very blunt, something that is not natural is artificial - and that implies it has been manufactured.
There have been many sources who have discounted my opinion and aggressively labelled it a conspiracy (first warning sign). But when I look deeper into these sources I find significant conflicts of interest (i.e., financial and political). Considering the implications of liability for the (suspected) offending party, we may never see a confirmed truth of this opinion.


I will add that this is an opinion article. If this were a journalistic one, I'd need to cite everything (including sources for my latest point on conflicts of interest). Also, keep in mind, I've not really provided multiple sources to defend this opinion, either. So, in full transparency, get out the salt. It's up to you to dig further, if you prefer.

In closing, my heart goes out to all the victims of this pandemic, both directly and indirectly. I believe (with hope) the end is in sight.

Stay safe.

 

 


Is Our Understanding of (Everything) In Danger Because of this Muon Wobble Problem?

27/5/2021


 

The Muon is a particle in the Lepton family, to which the Electron also belongs. Almost everyone has heard of the Electron, but the Muon? OK, that particle gets not so much air-play, but it does "exist".
A recent discovery coming from Fermilab has been feeding an incredible number of secondary stories in the media lately. Apparently our understanding of the sub-atomic particles of the universe has been all wrong, and we'll need to reinvent our map of the fundamental components of the universe.

Well, let's not throw the baby out with the bath water just yet...

Here's a nice diagram of where we are in understanding matter. Basically, normal matter (which is stable) consists of the lowest-energy-generation Quarks (Up/Down) and Leptons (Electrons/e-Neutrinos), and you tend to see the rest when you start colliding particles together in the magnetic containers at the CERN (European) and Fermilab (American) labs.





What I find REALLY interesting are the Boson particles, the ones responsible for mediating the forces as we understand them today - the Strong, Weak, Electromagnetic, and Gravity forces - but this post is NOT about them, at least, not so much.

So, back to the Muon...

Bear with me a moment as we are going to get a bit deep into some specifics to explain this new "discovery".

As per many calculations, the magnetic moment of the Muon was postulated to be at a specific value, which results in a "wobbling" behaviour (precession) as it travels within the magnetic containers of these massive particle colliders. However, when experiments (Brookhaven in 2001, Fermilab in 2021) actually managed to measure this wobble, the magnetic moment was slightly off in strength from the expected calculated value. The difference in 2001 was explained by some spooky quantum physics behaviour referred to as "quantum foam", where particles can randomly appear and disappear around the Muon, effectively popping in and out of existence momentarily, just enough to cause a variation in the Muon's effective magnetic moment.

Fermilab's setup is able to accumulate many more Muons than previously, and its recent measurements of the wobble confirmed the Brookhaven result while remaining significantly different from the theoretical prediction. This means that something is occurring within this quantum-foam effect that is not accounted for (perhaps a new type of particle or force interaction); we just do not know.
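To put rough numbers on the tension, here's a minimal sketch using the central values published with the 2021 announcement (rounded; treating the two uncertainties as independent Gaussians combined in quadrature is my simplifying assumption):

```python
import math

# Muon anomalous magnetic moment a_mu = (g-2)/2, dimensionless.
# Central values rounded from the 2021 announcement; treat as illustrative.
a_exp, err_exp = 116592061e-11, 41e-11  # BNL + Fermilab combined measurement
a_thy, err_thy = 116591810e-11, 43e-11  # Standard Model consensus prediction

diff = a_exp - a_thy                         # experiment minus theory
sigma = diff / math.hypot(err_exp, err_thy)  # significance in standard deviations
print(f"difference = {diff:.2e}, significance = {sigma:.1f} sigma")
```

At roughly 4.2 sigma this is striking but still short of the 5-sigma discovery threshold physicists demand, which is why "we just do not know" is the honest summary.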

Are we really upending the existing standard model with this finding? My take: not at all. It seems to me, the closer we look into what we think the universe is made of, the more unexpected things we find. This recent experiment is yet another example solidifying this as a fundamental truth.

I DO think we are far from understanding the true nature of the force (boson) side of the model - especially with gravity and gravitons - so stay tuned - we have a lot more to learn....  :)



The Writing on the Wall #3

10/3/2021


 

Unfortunately, things are changing geologically around our planet a little faster than I expected.
More than a few blog entries back I stated that as the ice melts, the water redistributes itself around the world's oceans - and not always in a nice, even pattern, mind you.
One important factor to consider is that the ice melt of concern is LAND MASS ice, which is predominantly Greenland/Iceland/Antarctica. Here's a good summary on Antarctica:
https://earthobservatory.nasa.gov/images/146322/antarctica-melts-under-its-hottest-days-on-record

When this occurs, the tectonic plates experience additional pressure upon them. Some would say this is negligible. But we are not discussing minor melting here. This involves ice melting at the rate of gigatons per year. That's a bit hard to get one's mind around when the numbers get that high.
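To get a feel for the magnitudes, here's a back-of-envelope sketch; the melt rate, the ocean area, and the assumption of uniform spreading are all simplifications of mine, not measured values:

```python
# Rough estimate: extra sea-floor pressure from one year of land-ice melt.
GT_KG = 1e12                 # kilograms per gigaton
melt_gt_per_year = 300       # assumed annual land-ice melt, in gigatons
ocean_area_m2 = 3.61e14      # approximate global ocean surface area
rho_water = 1000.0           # water density, kg/m^3 (rounded)
g = 9.81                     # gravitational acceleration, m/s^2

sea_level_rise_m = (melt_gt_per_year * GT_KG) / (rho_water * ocean_area_m2)
pressure_pa = rho_water * g * sea_level_rise_m   # added pressure on the sea floor

print(f"sea-level rise: {sea_level_rise_m * 1000:.2f} mm/yr")
print(f"added pressure: {pressure_pa:.1f} Pa/yr")
```

A few pascals per year is tiny against the megapascals already sitting on the plates, but integrated over decades and distributed unevenly, whether it matters is exactly the open question of this post.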

So let's continue down the action/reaction logic here.
If this redistribution of weight is causing geological instability, then we should see this in the form of an increase in earthquake and volcano activity.
Some would like to argue that the planet moves through peaks and valleys in activity throughout the decades - that one year can be quite active and another, well, not so much. My argument to this is that we have a "root cause" that does not "go away". This means such events start small and continue to grow in intensity.


So what's going on now?
In the last few months earthquakes are up in frequency, and so are volcano eruptions.
We have 52 active volcano events going on this morning. What's the average per year? It's around 60-70, so well within range... But as we are only in March, maybe something's up... take a boo at the 2020 numbers here:
https://volcano.si.edu/faq/index.cfm?question=eruptionsbyyear
Something to be concerned about? Weeeellll, let's consider longer-term trends. Here's a good set of data from the Smithsonian: https://volcano.si.edu/faq/index.cfm?question=historicalactivity The problem here is that we are INDEED on a rise of activity (or maybe that's just reflective of our ability to track such activity across the globe with greater accuracy now than before).
Great, just more confusion... Well, not so much. We've been witnessing some pretty significant melting from the 1980s on, so if I'm right about a correlation, we should be seeing a spike - however small, but a spike nonetheless. Just take another look at that graph...



What about earthquakes? I found this data to be a bit challenging. There's lots of detailed info for the current and previous year or so, but trending total numbers above a given magnitude is a little more tricky - so not as definitive as volcanoes per se. This was a good site:
https://www.ngdc.noaa.gov/hazel/view/hazards/earthquake/event-data?maxYear=2021&minYear=1980&minEqMagnitude=6
Do we have a problem or not?

In my opinion, this ball is rolling: geological instability has already started. It's pretty evident in the numbers, and no one is reporting this (of course not).
IF I am correct, this activity will continue to increase until the melting stops and mother nature is satisfied with its new balance. As far as I'm concerned, the writing is on the wall.

We can't do anything about it anyway - which is true - but if you are in a possible risk area (for either) I'd suggest getting your disaster planning sorted and getting prepared. Don't live in fear; just be wary of signs, and be ready.


Here's a nice summation of today's activity on YouTube...
https://www.youtube.com/watch?v=ClSCOxhRXd4
Keep an eye on the ones in Antarctica, as they can cause significant havoc.
Note: the HIGHER the sulfur-dioxide output, the more substantial (and rapid) the cooling effect. So we "should" see another cold spell set in before it warms up in the northern hemisphere.





The Writing on the Wall #2

1/3/2021


 
So, I originally laid out the challenges we have with power production and listed the possible solutions currently in progress... and now, inevitably, it's time to revise. Yes, it turns out I was a little bit behind on the facts...

So let's talk power...

Fusion - we are much CLOSER than I anticipated, given the advent of superconductors and improvements in magnetic confinement field capability. The MIT SPARC project is the primary example here. Lots of good work done on the practicalities of implementing a simpler, supportable approach. A commercial, continuously running, long-term supportable unit is what we need. Good news here!

Geothermal - I kind of glossed over this, based on the cost assumptions; however, reusing existing drilled wells introduces an advantage. We have considerable work to do to raise the efficiency of low-temperature power extraction systems, but this technology should be given a bigger thumbs-up. I do still expect this to be a "large commercial" solution. Understanding the scale of implementation is always a challenge.

Zero-point energy - reality or myth? This goes back to the suppression of knowledge by certain immoral and unethical organizations. Do you automatically go to conspiracy and fake mythology on this? In my opinion, the best response is to think very critically about such information and really understand how things actually work in this world. When men like Paul Hellyer present the unbelievable as mundane, it's very concerning that solutions to so many of our problems COULD BE within reach, yet we are denied.

Talk soon  :)


The Writing on the Wall #1

5/2/2021


 
As we (very) slowly ease out of the highly infectious phase of the Covid pandemic, I decided to take a moment to reflect on this mess. The "writing was on the wall" when it came to this event - we had warning shots over the bow (SARS being quite recent) - yet still, the handling of this situation was purely abysmal by most countries in the world (including my country, Canada). We could dig into the why, but let's just boil it down to the basics: no one wanted to spend the money to prepare. One often justifies inaction via the risk, weighing it against the probability. With the world economy practically stalled, and hundreds of thousands, possibly millions, of (small) businesses on the brink, it would be an interesting question to pose to the bodies in power: Do ya think you made the right call now?
Ok, hindsight is great, and saying “I told you so” leaves little satisfaction, and for the most part, it's pointless.
So how do we stop this from happening again? I'm sure we are going to hear from many experts on this, and I do expect the parties in power to implement changes. We can all hold them accountable for that. The motto "learn from your mistakes" comes to mind.

BUT, let's take a good look at other things where the wall has been plastered with warning signs - and MAYBE we should be pressing our political representatives to take action.

I decided to take a first (simplified) look at this, applying some basic logic. Please note I do not wish to come out of this with a negative slant on things. I'll be the first to admit I'm not always right; some of these things may not occur, and we may be able to stop some of them before they reach a critical point.

So in MY OPINION let’s examine some of what I think is obvious “writing on the wall”:

Economy
  • I expect this to be our next major issue as fallout from the pandemic (possibly on par with its impact) - the proverbial straw that breaks the camel's back. I'd love to hear our political leaders defining their strategies on how they are going to manage this NOW, versus acting like victims later. Countries are printing money, are over-leveraged, and have incurred "extreme" amounts of debt (i.e., interest costs on this debt are skyrocketing)
    • It's highly probable to expect market adjustments and some “big changes” across the globe
    • Lots of negative talk on the internet – worldwide depression, and all that. In my opinion the world is abundant with investment opportunities – it’s the market levels/valuations that are out of whack.
    • Governments need to have the ability to control their own capital. The bank cartels hold countries under their thumb and constrain governments from acting to the benefit of its citizens. Most debt is primarily owed to the multinational banking cartel. Financial independence of our countries is absolutely essential to implementing meaningful change.   
    • These cartels can't control cryptocurrencies, and that in turn drives their global appeal. I expect more developments in this area - specifically in regard to block-chain account management - which provides the owner with a single vector for managing their funds.
  • Watching the stock markets, it's clear that Tech companies are carrying a large portion of the investment growth load, and sure enough, many are delivering.
    • The "fake value" companies are fads - I don't expect them to last, but I do expect them to cycle from one to the next.
    • The true tech companies – the problem solvers – the service providers – they will continue to forge ahead. These are the movers and the shakers right now  (SpaceX Starlink, Amazon, etc)
  • Municipalities are under tremendous stress to repair and modernize their failing support infrastructures. Some of these problems can be solved by new methods, technologies, ideas and, unfortunately, higher taxes. The insane real-estate market is driving up mill rates, and therefore municipal income - but if history serves, we'll see aggressive waste. If and when a depression arrives, and the market crashes, we'll see pressure for a reset on these mill rates - and my guess is it will take citizens taking matters into their own hands to "sue" the municipality before this corrects.
  • This next part is what one may consider "conspiracy thinking" until one starts to dig deeper and actually finds these openly published stories and videos. This is troubling to say the least. I'll call this group "The money spinners": members of the IMF and WTO, and the other cartels (often called the Cabal), are systematically dismantling the middle class, and essentially driving whole countries into imposed feudal systems - where citizens will "own nothing and be happy". Perhaps more accurately, they will live as indentured slaves to the 'system' and never aspire to more. Exposing this agenda, I dare say, requires the involvement of the people. If you are unsure what I am referring to - google it - educate yourself and make a difference. This is the natural order of things - rich get richer, poor become poorer. It's my belief the average citizen is losing ground.
 
Power (and the Production of)
  • Coal and Oil: How long can oil last? Well, there is shale extraction - and that lengthens the productivity curve by many, many years. As the northern and southern hemisphere ice melts away, many new oil fields may be discovered. And as for coal, for every plant shut down in the west, how many more do we see sprout up in the east? It seems this is an unwinnable battle, with our own planet at stake.
    • What will it take to abandon these archaic technologies? A shift by the financiers is the clincher here, and I don't see that they have the will or desire to do it.
    • The impact – read the environment notations below.
    • The trend so far - NO SIGNIFICANT CHANGE (well, actually, consumption is just getting worse). So why think it's going to change in the future?
  • Electricity: Let's first start with the single most important consumption challenge - electric vehicles. We all think they can, and should, continue to grow in demand and supply, but the holy grail for adoption across private and commercial cars and trucks is the long-duration, short (high-capacity) charge battery. We aren't there yet, but we are getting closer. Ongoing investment by environmentally conscious parties is needed to make this real. But this also requires our governments to step in to help develop the underlying infrastructure, where the conflict-of-interest oil companies are hampering development. A single billionaire can beat the system, but he can't keep the momentum up alone.
    • So the next problem - increasing supply and modernizing distribution. This means the collective WE must step back into the nuclear age, or we build some amazing solar generating centers covering thousands of square miles - assuming we can figure out how to build these panels without rare materials.
    • Nuclear Fission: There are new designs that are amazingly safe - they fail to a low-energy state. These new reactors can actually be fed waste from the old breeder-style reactors. Every existing breeder reactor today is a possible nuclear disaster. It's time to shut them down, move away from 1960s technology, and embrace the future. I don't see this happening without some serious public education and buy-in from the public and political parties - but we have the answer here, just not the political will.
    • Nuclear Fusion: This is the "safer" version, but to date the power needed to run the system is "almost" equivalent to the power generated ("almost" meaning yes, we've seen improvements here). Will we ever get the math to work? Maybe, possibly, sure. The first one out of the gate wins, so the race is still on.
    • Wind and Geothermal power: Why lump these two together? Because the commercial units are high-cost and have limited lifetimes. The current commercial windmill fleets are financial disasters waiting to happen. With unit lifetimes rated between 20-25 years, the math doesn't work unless there are fundamental changes in approach. Only large commercial operations can make these numbers work, considering the scale of the costs.
    • Solar: I do think we can grow this technology, but the need for rare materials is the clincher. The cost of production is still quite high, relatively. We should all be able to source power into the distribution grid, and in turn provide a distributed power production that is much more stable than limited large sources. I do think we are going to see more improvements in this area.
    • Other: Maybe Tesla was on to something with his “Wardenclyffe Tower” (although that was distribution vs generation). I guess we’ll never know as his materials/plans were seized by the government at the time.  That would be interesting to know though, wouldn’t it? There are other possibilities for independent power generation solutions, I am sure.
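The lifetime math flagged under Wind above can be sketched as a simple payback calculation. Every input below is an illustrative assumption of mine, not industry data:

```python
# Illustrative wind-farm payback sketch; all figures are round-number assumptions.
capex_per_mw = 1_400_000   # USD per installed MW (assumed)
opex_per_mw_yr = 45_000    # USD per MW per year of operations/maintenance (assumed)
capacity_factor = 0.35     # fraction of nameplate capacity actually produced (assumed)
price_per_mwh = 40.0       # USD wholesale electricity price (assumed)
lifetime_years = 22        # mid-range of the 20-25 year unit lifetime figure

mwh_per_yr = capacity_factor * 24 * 365           # annual output per installed MW
net_per_yr = mwh_per_yr * price_per_mwh - opex_per_mw_yr
payback_years = capex_per_mw / net_per_yr
lifetime_profit = net_per_yr * lifetime_years - capex_per_mw

print(f"payback: {payback_years:.1f} yr; lifetime net: ${lifetime_profit:,.0f} per MW")
```

Under these assumed numbers the turbine only just pays itself back inside its 20-25-year life - the thin-margin concern in a nutshell; cheaper capital, higher prices, or longer-lived units change the picture.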
 
Our Planet, our Home
  • Call me a tree-hugger if you want. Say I'm scare-mongering with the information below. It's up to each and every one of us to be comfortable with our choices. We are the ones who must be able to answer whether it's all going to be OK for our children, and our children's children.
  • To start off, want to see something really scary?  Go here: https://extinctionclock.org/
  • Global Warming/Climate Change
    • To summarize: We dump CO2 into the atmosphere at the rate of tens of billions of tons a year. Around 2015-2016 we passed the critical 400 ppm mark and have kept climbing. We have collected and observed direct evidence that the climate is warming, the ocean is warming, and the ice is melting. We witness evidence of methane bursting out of permafrost and from the ocean floor, and we watch year after year as temperature records are broken, each year successively climbing higher. This is the runaway greenhouse effect.
    • Who or what is producing all this CO2? The big sources include coal plants churning out our endless need for electricity, machines of every shape and size using the internal combustion engine, and furnaces and heaters burning natural gas. Yes, even we and other mammals breathe out carbon dioxide. We'll omit natural sources like volcanoes, as they are minor contributors, although they can play a part.
    • Despite arguments to the contrary (from certain non-believers) the evidence continues to pile up that global warming/climate change is REAL. The question for many is: is this going to affect me at all? If you live in a coastal city, or on an island, maybe you'll see changes in a very direct way within the next 10 to 20 years. The link below really doesn't sound that bad in its projections, but experts have been wrong so many times in the past, I can only wonder. An example story on projections is here: https://www.sciencedaily.com/releases/2020/09/200917122838.htm  and here https://www.cbc.ca/news/technology/ice-melt-1.5886228 Consider that 10,000-year-old ice is melting at the poles.
    • I do worry about the "indirect" things though, and that includes problems that we cannot and have not anticipated. Perhaps some think-tank out there has all these risks identified by now - I'd like to see that list. Maybe we can do things to get ahead of this, to prepare.
    • Modern civilization rides along performing this tenuous balancing act of supply and demand, distributing electricity, natural gas, fuel, food, and emergency, health and law-enforcement services.  But we've seen what happens when this balance is damaged.

  • Loss of life
    • Extinctions of plants and animals - we are looking at 1/3 to 1/2 of our species lost by 2050, according to at least one source. Here's another: https://www.theworldcounts.com/challenges/planet-earth/forests-and-deserts/species-extinction-rate/story
      • Where’s this all going? It’s really quite simple, we all lose.
    • The Amazon rainforest has been called the "lungs of the planet", but we are destroying it at millions of acres per year. Is this a concern? Well, never mind us wiping out the undiscovered cure for cancer (we've already done a movie on that one) - how about low oxygen levels? O2 levels have been in constant decline, accelerating since the 1900s. We are at a 20.9% mixture in the atmosphere; when we get below 19.5%, human beings are in serious trouble. Current prediction models based on consumption need to be updated to reflect the reduction in production capability. https://www.sciencedirect.com/science/article/pii/S209592731830375X#f0025
    • Many parts of the ocean are dying - literally becoming "hypoxic dead zones" - and I certainly don't see this getting any better. Improving this problem requires considerable changes in both sewage management and agricultural practices. The following link provides a map of known areas: https://www.sciencealert.com/dead-zones-in-ocean-quadrupled-since-1950s-killing-marine-life

  • Increasing geological activity
    • Earthquakes - although the last 10 years have shown an increase in activity, the current expert thinking is that this is just part of the planet's larger variations. Certainly that is an argument (the same one used with climate change). The true issue at hand is that we have little to no capability of improving this situation. https://www.nbcnews.com/science/science-news/worldwide-surge-great-earthquakes-seen-past-10-years-n233661  vs https://www.usgs.gov/faqs/why-are-we-having-so-many-earthquakes-has-naturally-occurring-earthquake-activity-been?qt-news_science_products=0#qt-news_science_products
    • Volcanoes - the Smithsonian says no, events are not on the increase, as per this link: https://volcano.si.edu/faq/index.cfm?question=historicalactivity  although this next source counters that position. As with earthquakes, we have no way to improve this situation. http://www.publichealthintelligence.org/content/volcanic-activity-within-last-10000-years
    • Let me throw a curve your way. Gigatons (yes, that's what I said) - Gigatons of ice are melting every year into the oceans. The glacial-melt data and satellite imagery are there if you wish to look for them. This "extra weight" imposed upon the tectonic plates is not uniformly spread across all the oceans. Such forces (which are titanic in magnitude) introduce a strong potential for increasing geological instability, which in turn results in increasing earthquake and volcano activity.
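Tying the emissions figure above back to the ppm numbers, here's a rough conversion sketch; both the annual tonnage and the airborne fraction are round-number assumptions of mine:

```python
# Convert annual CO2 emissions (Gt) into an approximate atmospheric ppm rise.
emissions_gt_co2 = 37.0    # assumed global fossil CO2 emissions, Gt/yr
gt_co2_per_ppm = 7.8       # roughly 7.8 Gt of CO2 raises atmospheric CO2 by 1 ppm
airborne_fraction = 0.45   # assumed share staying airborne (rest absorbed by oceans/land)

gross_ppm = emissions_gt_co2 / gt_co2_per_ppm    # if every ton stayed in the air
net_ppm = gross_ppm * airborne_fraction          # after natural sinks take their share
print(f"gross: {gross_ppm:.1f} ppm/yr, net after sinks: {net_ppm:.1f} ppm/yr")
```

The roughly 2 ppm/yr net figure lines up with the observed year-over-year climb in the CO2 record, which is a decent sanity check on the tonnage.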
 
Please allow me to leave you with a perspective if you will.  Many people spend their lives trying to answer the question:  Why am I here? Or What’s my purpose in life?   
Maybe we can all start with just a few small things: 
1) Let’s be responsible stewards of our home, and of our planet. Do what we can, where we can.
2) Let’s reach out and help others in need.
3) Let’s get involved. When we encounter things that are just plain wrong - let's not ignore it.

The above is an incomplete set of concerns, and there are other areas and specific items that should also be included, but I'll leave that for another date...




    Author

    Patrick MJ Lozon
    Feel free to add to the discussion...

