Saturday, 19 February 2022

AMNIMARJESLO Glass Door GOVERNMENT 2215 on the 4 TMUs (4 Tracker to be Moving Union), Look that and Stay that AMSWIPERGLOCK

Evidence, Analysis, Sensor, Interface, Distance and Time, Space and Time in Flying Objects, Maneuvers, Motion Sequence Schemes, Probability Analysis, Locking Targets Automatically, Calculating Direction Chance, Launch Weapon Trail, Synchronizing Tracker and Target, Go to Match, STAR. Welcome, and let's go: we need to study and practice how Evidence can detect fast, otherwise-undetected flying targets using the 4 TMUs: 1. Moving Active Electronically Scanned Array radar; 2. Radio Frequency (RF) Jammer; 3. Electro-Optical Targeting Pod (EO-TGP); 4. Infrared Search and Track (IRST). Of all of these, the most efficient, effective, and highest-quality electronic method is a blockchain that matches and integrates them. 10:24, 20 February 2022
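The four-TMU integration described above can be illustrated with a minimal, hypothetical fusion step: each sensor reports an estimated target position with a confidence weight, and the combined track is the confidence-weighted average. The function name, sensor weights, and positions here are illustrative assumptions, not a real fire-control interface.

```python
# Hypothetical sketch: fuse position reports from the 4 TMUs
# (AESA radar, RF receiver, EO-TGP, IRST) into one track by
# confidence-weighted averaging.  All numbers are illustrative.

def fuse_tracks(reports):
    """reports: list of (x, y, confidence) tuples, one per sensor."""
    total_w = sum(w for _, _, w in reports)
    x = sum(px * w for px, _, w in reports) / total_w
    y = sum(py * w for _, py, w in reports) / total_w
    return x, y

reports = [
    (100.0, 200.0, 0.9),   # AESA radar: high confidence
    (104.0, 198.0, 0.4),   # RF receiver: bearing-only, lower confidence
    (101.0, 201.0, 0.8),   # EO-TGP
    (102.0, 199.0, 0.7),   # IRST
]
x, y = fuse_tracks(reports)
```

A real system would weight by full error covariances rather than scalar confidences, but the principle of combining the four sensors into one track is the same.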
(Gen. Mac Tech (EASID STAR)) Case: free-moving birds, to be integrated
Case: rain, cloud and flash
Locking objects is done by means of electronic techniques that are efficient, effective, high-quality, and well understood: 1. The right space and time; 2. The courage to take and decide which electronic key is right; 3. Creating and analyzing scatter-matrix patterns in real time; 4. Working patterns and goals for the future (time ahead): analysis of targets in space and time across their possible motions. Take a look at the concepts of space and time and their possibilities in the EINSTEIN Protocol.
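Point 3 above, building scatter-matrix patterns in real time, can be sketched as the 2x2 scatter matrix of recent target observations: a small total spread suggests a tightly clustered, lockable track. This is only an illustrative sketch of the scatter-matrix idea, not the EINSTEIN Protocol itself, and the sample points are made up.

```python
# Sketch: 2x2 scatter matrix of recent (x, y) target observations.
# The trace (sxx + syy) measures total spread around the mean; a
# small value indicates a stable track suitable for locking.

def scatter_matrix(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    return [[sxx, sxy], [sxy, syy]]

points = [(10.0, 5.0), (10.2, 5.1), (9.8, 4.9), (10.1, 5.0)]
S = scatter_matrix(points)
spread = S[0][0] + S[1][1]  # trace: total spread around the mean
```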
Electronic circuit compilation, graphing, elements, and location in one-shot monitoring.
___________________________________________
Crab Tactical :: The crab is a working animal: it works by digging holes at the water's edge and moving along the water's edge, working individually but always staying at the water's edge. A wild but smart animal.
The intersection of High Tech and defense
____________________________________________
Today it is more important than ever to keep the supply of processing solutions across the sensor chain trusted and secure. Silicon Valley technology leaders, such as Intel, Xilinx and Nvidia, are making significant investments in our U.S. foundry (or fab) infrastructure to enable the security and supply of microelectronics. But that is only part of the story. It is also necessary to extend the trusted, secure supply chain to companies that adapt this microelectronics technology to the very specific requirements of the aerospace and defense (A&D) industry.
It was May 24, 1844 when Samuel Morse transmitted his famous telegraph message “What hath God wrought” from Washington to Baltimore. Twenty years later, the U.S. Military Telegraph Corps had trained 1,200 operators and strung 4,000 miles of telegraph wire, which increased to over 15,000 miles by the end of the Civil War. While long-distance communication proved a significant advantage for the Union armies, it also opened the door for wiretapping. It was these early experiences that demonstrated the impact of surveillance and set the foundations of electronic warfare (EW).
Over the last century, electronic warfare has had an increasing role in shaping the outcomes of conflicts across the globe; however, few people appreciate its significance and fewer still understand the technology. In this first post of our electronic warfare blog series, we present a brief history of the technology behind electronic warfare. Just as older cars are more intuitive to repair, the early EW systems are easier to understand. While wire-tapping was used during the Civil War, it wasn’t until the 20th century that the field of electronic warfare began to mature. By the start of World War I, the need for rapid communication over long distances became even more critical—leading to significant advances in the emerging field of signal intelligence.
Immediately following the declaration of war, the British severed Germany’s undersea cables, forcing them to rely on telegraph and radio—both vulnerable to interception. To protect the content of the transmissions, Germany began expanding on its cryptography capabilities. During World War II, the use of the electromagnetic spectrum played an even larger role. It was quickly discovered that by flying bombing runs at night, the bomber crews were protected from anti-aircraft fire. However, locating targets at night was no easy feat.
The Lorenz System
Prior to the start of the war, Germany had invested in commercial RF systems to support blind landings at airports with reduced visibility. Called the Lorenz System, it operated by switching a signal between two antenna elements—one pointed slightly more towards the left and the other towards the right. Instead of equal pulse lengths on each antenna element, the switch sent the signal to the right element for a longer period of time—creating a long pulse on the right antenna and a short pulse on the left. As the plane approached the runway, the pilots would hear short tones if they were too far to the left and long tones if they were too far to the right. When they were properly aligned, they would receive both signals and hear a continuous tone. During the war, this system was modified to use large, high-directivity antennas to transmit long-range, narrow beams. Two systems were built such that the beams could be steered to intersect directly over the target. By following one beam, the pilots listened for the second signal to know when they were over the target and timed the release of bombs. This simple system drastically increased the effectiveness of the night raids over England and made the development of a system to counter the beams a top priority. Upon discovery of the German system, the British developed a method to interfere with the beams.
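The Lorenz beam logic described above reduces to a simple decision rule, which can be sketched as follows. The function name and the beam-overlap threshold are illustrative assumptions; the real system worked in continuous RF, not discrete offsets.

```python
# Sketch of the Lorenz beam logic: the left antenna sends short
# pulses (dots), the right sends long pulses (dashes).  On the
# centre line both lobes overlap and the pilot hears a steady tone.

def lorenz_tone(offset, beam_overlap=1.0):
    """offset: lateral position; negative = left of the centre line."""
    if offset < -beam_overlap:
        return "dots"        # too far left: only the short pulses heard
    if offset > beam_overlap:
        return "dashes"      # too far right: only the long pulses heard
    return "continuous"      # both beams received: on course
```

The British countermeasures below exploited exactly this rule: injecting extra dashes (or a continuous carrier) meant the pilot could never distinguish the on-course condition.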
Using high power transmitters, the British would broadcast the same long-tone pulse signal used by the German system. When this signal was superimposed on the same frequencies, the German aircraft would never hear the steady tone and would be unable to simply follow the beam to their target. Other methods of jamming the German beams involved the use of a BBC transmitter to broadcast a steady tone on the same frequency. This CW signal filled in the breaks between pulses rendering the German system unusable. As the British began their bombing campaigns over Germany, they too needed a method to locate targets at night. Their approach was a similar system that used two transmitters; each broadcasting a train of pulses. By measuring the time difference between received pulses, the pilots were able to navigate. However, this system was also susceptible to jamming.
The Emergence of Radar
In addition to the jamming of their navigational aids, the British bombers faced a new threat—German fighter pilots that were able to track the British planes using radar. One type of radar encountered by the British was a land-based early warning system that alerted the Germans to an approaching attack and also provided details such as the number of aircraft. Through intercepted radio communications and direct raids on radar installations, the British were able to learn the details of these systems—such as the operational frequencies—that enabled them to develop the technology to combat them. Instead of simply jamming the radar, the Allies developed a system that would receive the radar signals, amplify them, and re-transmit them to the radar receiver. These additional signals were perceived by the radar system as reflections from additional aircraft. Employing this technology, a single aircraft could function as a decoy and pull resources away from other areas.
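The repeater-decoy trick works because a radar converts round-trip time into range. A decoy that re-transmits the received pulse with extra delay therefore appears as another aircraft at a greater range. A minimal sketch of the arithmetic, with made-up numbers:

```python
# Sketch of the repeater-decoy effect: a radar computes range as
# r = c * t / 2 from the round-trip time t, so an echo delayed by
# the decoy shows up as a ghost target farther away.

C = 3.0e8  # speed of light, m/s (rounded)

def apparent_range(round_trip_s):
    return C * round_trip_s / 2.0

true_rt = 2 * 30_000 / C        # genuine echo from an aircraft at 30 km
decoy_rt = true_rt + 100e-6     # decoy re-transmits 100 microseconds late

real_km = apparent_range(true_rt) / 1000.0
ghost_km = apparent_range(decoy_rt) / 1000.0  # ghost appears 15 km beyond
```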
However, these early systems were dependent on the radar frequency, and by using multiple radars with different frequencies, it became much more challenging to deceive them. To respond to the radars that operated over a wider band of frequencies, the Allies developed a jamming system that would transmit noise in various frequencies across the radar bands. This was effective until the Germans started using additional frequencies for the radar. Instead of jamming the radar itself, the Allies discovered they could jam the communication signals between the radar operators and the fighter pilots. By sweeping a receiver over a broad frequency range, the British were able to determine the specific frequency that the Germans were using to communicate then transmit noise on that frequency.
Continued Technology Development
This back-and-forth cycle of inventing new ways to use the electromagnetic spectrum and developing the means to counter these new technologies continued through World War II and the Cold War. Even in the early days it was not sufficient to just have the best technology—in order to stay ahead, the technology required constant updates. Instead of deploying a system that could operate independently for a decade, EW systems required consistent modification to address emerging threats. Now, over a century and a half after that famous telegraph message, the invisible battle over control of the electromagnetic spectrum continues. The ability to communicate, track objects with radar, and to use GNSS to navigate have become critical to success on the battlefield. Additionally, a major advantage is achieved by disrupting an adversary’s ability to communicate, use radar and use GNSS. With today’s environment of rapid technology growth—such as compact GaN, high speed processing and AI—the battle for EW superiority is at its fastest pace yet.
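The sweep-and-jam technique described above, stepping a receiver across a band and attacking the busiest channel, can be sketched in a few lines. The spectrum values and channel spacing are invented for illustration.

```python
# Sketch of sweep-and-jam: step a receiver across a band, record
# received power per channel, and pick the strongest channel as
# the frequency to jam.  All power readings here are made up.

def find_active_channel(spectrum):
    """spectrum: dict of frequency_MHz -> received power (dB)."""
    return max(spectrum, key=spectrum.get)

# Swept measurements: 42.5 MHz stands well above the noise floor.
spectrum = {40.0: -95.0, 42.5: -60.0, 45.0: -92.0, 47.5: -97.0}
jam_freq = find_active_channel(spectrum)
```

A practical implementation would threshold against the noise floor and dwell long enough to confirm activity, but the selection step is the same.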
In the next post in this series on electronic warfare we provide an overview of radar technology before continuing on with posts on electronic support, electronic attack and electronic protection.
Glass Door (Transpose) _ Star Trek Tech
________________________________________
10 Star Trek Gadgets That Have Beamed Into Reality
___________________________________________
For the past half-century, Star Trek has offered fans a vision of the future by taking them on a deep voyage into the imagination to explore strange new worlds and seek out new life and civilizations, all while boldly going where no man or woman has gone before. If there’s one thing that we have learned from these televised trips, it’s that space is filled with so many fictional technological wonders, some of which may have influenced real-world scientific developments, discoveries, and inventions. But is it really Star Trek that has helped to make it so? Grab a cup of Earl Grey tea (hot!) from the replicator and join us on The Bridge as we assess the data from Starfleet’s most classified files to identify which technologies, gadgets, and services have beamed into existence after appearing in the science fiction franchise. 1. Communicators In the fictional universe of Star Trek, the crew of the Starship Enterprise use communicators to contact others, both onboard and off-board the ship. The handheld device allows crew members to contact other starships in orbit, which proves particularly useful when faced with challenging situations. For years, people have been using real-life communicators, otherwise known as cell phones, to regularly talk to people. Martin Cooper, the man credited with the invention of the first handheld cellular phone in the 1970s, has stated that his prototypes for the device were inspired by the original Star Trek tech. 2. Replicators In the Star Trek universe, the replicator has a number of functions and purposes, with some proving to be more popular than others. For instance, Captain Jean-Luc Picard frequently uses the machine to order a cup of “Tea, Earl Grey, Hot,” which is then produced from the ship’s reserves.
These days, real-world replicators exist in the form of 3D printers, which build three-dimensional objects from a computer-aided design model. While they might be lacking the capability to deliver the perfect brew, these devices have a range of practical processes in order to manufacture complex objects. 3. Telepresence Crewmembers aboard the Starship Enterprise are able to access special telepresence technologies that allow one person to connect with another in a way that makes both parties feel as if they are present in the same location, even though they might in fact be separated by time and space. Since 1966, this invention has become an increasingly common and useful communication tool in real-world scenarios. In particular, Cisco’s telepresence system offers an authentic experience by mirroring the surroundings of multiple users in a videoconference to make it seem like they’re together. 4. Tricorders The tricorder is another important piece of equipment seen in the Star Trek original series. The multifunctional handheld device can be used to sensor scan an environment or an individual and record data for analysis. In particular, Dr. Leonard “Bones” McCoy often uses it to diagnose and cure patients. Here on Earth, a number of parallel products have been created to mimic the capabilities of the Star Trek device. For instance, the DNA Lab by QuantuMDx can scan a patient and deliver a diagnosis in 15 minutes, while NASA employs LOCAD to measure organisms at the International Space Station. 5. Universal Translators While Captain Kirk and his crew planet-hop aboard the Starship Enterprise, the space squad make contact with several different alien races and species, originating from a variety of strange new worlds, so the universal translator is an essential piece of kit to decode these foreign languages. Today, there are numerous technologies working to achieve the same outcome, though admittedly many have not reached Starfleet’s level quite yet. 
A lot of companies, however, are making significant progress in developing more advanced software that can translate complex sentences, especially via apps. 6. Hypospray Hypospray is one of the gadgets that is commonly used in Star Trek because Leonard “Bones” McCoy is a doctor, not a time-waster, and this medical device speeds up the process of administering medicine by injecting it through the skin using a non-invasive transport mechanism. In reality, jet injectors have been in existence since the 1960s, and though syringes have not yet been phased out, new technology is constantly being developed. MIT engineered a next-generation device that could make a trip to the doctor’s office a less painful experience in the not-too-distant future. 7. Tablet Computers Personal Access Display Devices, or PADDs, are shown to be in widespread use since at least the 22nd century in the Star Trek universe. The futuristic computer interface is used by space-faring organizations to punch in coordinates for star systems, as well as being a recreational tool aboard the ship. Over the years, we have witnessed real-world computers evolve into slim-line, touchscreen devices with significant computing power. Apple’s first-generation iPad helped to bring the device further into the mainstream in 2010. Now, many rely on tablet computers for both work and leisure activities. 8. Phasers In Star Trek, phased array pulsed energy projectiles, aka phasers, are available in a wide range of sizes and styles, ranging from handheld firearms to starship-mounted weapons, which can discharge beams, slice materials, trigger explosions, and, most famously, be set to stun. In the current world, comparable alternatives have been in use since the 1970s. Tasers and stun guns work on a similar principle to Captain Kirk’s primary weapon, however, these energy weapons have to be activated in close range to the target (the Borg or otherwise) to stop them in their tracks. 9. 
Tractor Beams The high-powered tractor beams in Star Trek are often used by starships and space stations to control and physically maneuver objects in deep space, which is particularly useful for towing ships in need of assistance to safety and pushing ships out of dangerous situations. In real life, optical tweezers operate in a comparable fashion to the graviton beams that commonly appear in the sci-fi genre, though on a much smaller scale. Rather than hauling ships from one location to another, these scientific instruments use laser beams of light to hold and move microscopic objects. 10. Warp Drive Warp Drive is one of the most iconic technologies used in Star Trek voyages. It works by generating warp fields to envelop the Starship Enterprise in a subspace bubble to distort the spacetime continuum and propel the vessel forward at a velocity that is faster than the speed of light. Interestingly, NASA has indicated that this completely fictional concept could actually be possible. In recent years, the scientific community has become increasingly excited about the concept of a warp propulsion system, which could provide the blueprints for ultrafast interplanetary travel in the future.
______________________________________________ Transporter and Electronic Glass Door
_____________________________________________

Sunday, 19 December 2021

AMNIMRJESLOW The Time Tunnel Division

Metaverse in the concept of space and time: a real dimension and a derived, virtual dimension. These two dimensions are both real, but one takes the form of an analogue realm, while the other is a digital realm in the form of a hologram or digital space and time, where the time dimension can be set up for a certain metaverse period. The metaverse instrument and its control begin as a combination of multiple elements of technology, including virtual reality, augmented reality and video, where users "live" within a digital universe. The metaverse is a virtual space that users can create and explore without meeting in the same physical space. The metaverse is a digital dimension that can be connected to the natural (analogue) dimension through WIPER (Word Instruction Peripheral Energy Recovery) according to the reloaded metaverse. At some point the analog metaverse will be connected to the digital metaverse for multiple inputs and outputs: 1. Metaverse for Research; 2. Metaverse for Trade; 3. Metaverse for Warfare; 4. Metaverse for Outer Space; 5. Metaverse for the Time Tunnel.
Gen. Chief_Time1 ( ^+++++++++++* ) We are like in a garden, which is a collection of intersections between dignity and the living things in it; a garden is a place where we learn about nature. We need change management to evolve natural science into digital science, and then to improve adaptive stability toward a good time tunnel. The metaverse is indeed a virtual reality, but it’s not quite the same thing as what you’ve seen in science fiction blockbusters. Imagine a franchise like The Matrix, where the world is a digital simulation that everyone is connected to, and is so well-made that nearly no one knows that it’s not real. The metaverse is not quite like that, but it definitely has the potential to evolve into something fantastically immersive.
The Metaverse in the 21st Century
__________________________
At its most basic, the metaverse is a virtual reality that allows people from all over the world to interact, both with each other and with the metaverse itself. Users are often allowed to obtain items that remain theirs between sessions, or even land within the metaverse. However, there are many ways to interpret that concept, and it has evolved greatly over the years. On the internet, we’re always interacting with something — be it a website, a game, or a chat program that connects us to our friends. The metaverse takes this one step further and puts the user in the middle of the action. This opens the door to stronger, more realistic experiences that simply browsing the web or watching a video fails to evoke very often, if ever.
Virtual reality (VR) and augmented reality (AR) are both concepts that are closely tied to the metaverse, but they are not one and the same. Instead of viewing them as different iterations of what is essentially the same thing, it’s good to view them as separate entities that supplement each other. VR and AR equipment allows the user to immerse themselves in a virtual world. In the case of VR, we are shown completely different surroundings. Be it a game or a movie, VR lets you interact with the changing world around you. AR, on the other hand, adds elements to your real surroundings and lets you interact with them in various ways. The difference lies in the purpose. You can play a VR or AR game at any given time without interacting with others, but the foundation of the metaverse, as envisioned by Meta and other companies, is human contact. In short, the metaverse is the playground for both of the above — a way for people to share a virtual universe together, be it for work, school, exercise, or simply for fun. The use of VR and AR tools will go a long way in expanding the metaverse and making it feel like a real experience as opposed to a video game with extra steps. However, the concept of the metaverse goes far beyond just VR and AR — it’s meant to bring people closer together in previously unheard of ways. This, in turn, also opens a lot of room for expansion. Considering that Meta now plans to heavily rely on both VR and AR in order to bring realism to the metaverse, buying Oculus seven years ago doesn’t feel like a random decision at all. It’s worth noting that Oculus Quest will soon be no more. Starting in 2022, the entire product line will be rebranded to Meta Quest, thus finally completing the acquisition and erasing the previous branding. In addition to Meta Quest, Andrew Bosworth, chief technology officer of Meta, announced that some Oculus products will be called Meta Horizon. 
According to Bosworth, this will be the branding that encompasses the entirety of the VR metaverse platform. Artificial intelligence will be used to listen to the user’s voice and animate their avatar accordingly, complete with matching lip movements. Switching to 3D meetings will also produce additional hand movements. One role of the metaverse in everyday life may be to integrate it into the future of remote work. In an ideal metaverse, you forge your own destiny, and many of the common video game limitations are removed. However, the first step is the same for nearly every metaverse, game-related or not: You have to create your character.
Becoming an avatar
In the metaverse, users are given an avatar — a representation of themselves that they can tweak to look however they like. The way the avatar looks depends on the platform. It can be something very basic, but it can also be high quality, with a lot of room for customization. Users can strive to remain true to life, but they can also turn themselves into someone entirely different. The avatar, once created, is the user’s ticket to the metaverse — a virtual universe where the sky is the limit, provided one has the imagination to suspend reality for a little while. The avatar can move, speak, explore the area, and more. The limitations of the avatar lie entirely with the platform.
Some instances of the metaverse resemble a video game and let the user walk around using a keyboard and mouse. More advanced versions involve the use of virtual reality headsets and controls that truly immerse the user in the world by replicating their real-life movements in the metaverse. Different companies have different takes on the avatar-creating process. Through the use of mixed reality technology, the avatar will represent the user in a realistic way. In the future, this will involve a full range of facial expressions, body language, and backgrounds. Meta has big plans when it comes to avatar creation in its upcoming metaverse, Horizon Worlds. The avatars will be supported by VR and will replicate the user’s actions in real time. While this all sounds peachy, these avatars do not currently have legs — possibly to make the movement and travel easier to manage. However, Meta is also working on photorealistic Codec Avatars: Impressive-looking, ultrarealistic avatars that will be rendered in real time along with the surrounding environment. Regardless of the platform, the ideal metaverse will let the user pick what they want to look like while retaining the realism of facial expressions and movements when supported by VR.
What does the metaverse look like?
Before answering this question, we need to distinguish “the metaverse” from “a metaverse.” There is no one singular metaverse that connects all the other universes into one cohesive whole, although they all involve the use of the internet to connect their users to one another. As such, every metaverse can look entirely different from the rest. The way a metaverse looks depends entirely on its creator. Some metaverses are sandbox-like, giving a lot of room for creation and not limiting the user a whole lot in terms of what they end up building. A metaverse can look like a classroom, a street, a fantasy forest, or the bottom of the ocean. In such a metaverse, real-world rules still mostly apply.
You’re likely to see the sky, buildings, and nature, and most significantly, other people. The art style depends on the metaverse and can be cartoony, realistic, or anything in between. The bottom line is that a metaverse can look like a classroom, a street, a fantasy forest, or the bottom of the ocean. However, the most popular instances of it offer something that’s a mix of all of those things, all thanks to the freedom they provide their users. In an ideal world, the metaverse should connect each and every user to one another. Joining a public server should provide the ability to interact with everyone else who is connected at the time. The reality is often different. As certain metaverses grow more popular, it becomes impossible for the servers that host them to handle such huge traffic loads. This means that some developers create different layers that separate the users, effectively making the world a little smaller. There may come a time in the future when this can be avoided, but right now, the metaverse is often fragmented — not to mention the fact that people use different platforms, effectively choosing their preferred universe. As mentioned above, every company has a different take on the metaverse. The metaverse, as a concept, is not very easy to define, if only because of how limitless it seems to be. This means that its general purpose can be defined on a case-by-case basis — not just the company or group of people that create it, but also each individual user. The general purpose of the metaverse is to connect with others through a virtual, shared universe. Be it for work, self-improvement, or simply entertainment, the metaverse exists to breach the borders of reality and distance, connecting people from all over the globe. Allowing users, portrayed by their avatars, to interact with the world at large without giving them any clear goals allows for a lot of freedom of choice. 
This is also what Meta has built its big reveal on — the fact that in the metaverse, you can essentially do just about anything you want. Let’s take a look at some of the more common things you can do in the metaverse. Trade property Once you own something in the metaverse, it can be sold or traded to one of the other users. This adds an element of wealth and prestige to an otherwise detached world. Some lots are worth more than others, some items are rare while others are common — all of this adds up to the creation of an economy that applies to a particular metaverse. Typically, plots of land in the metaverse vary in size and location. As this is a virtual rendition of real life, it’s not a surprise that the real estate market is alive and well even in the metaverse. Contested plots, located closer to busy areas or simply made more desirable through some other luxury, can reach much higher prices than a tiny square of grass on the outskirts of town. Advertise Some metaverses attract not just regular users, but also companies. As the universe is shared by many, this opens up the opportunity to advertise. Simply buying land and displaying the logo of the company can be an effective way to pique or refresh interest. Companies are able to benefit from the metaverse in more ways than one. Organizing events, creating crossovers between franchises, and engaging with the user base is made easier in a seemingly limitless universe. Live and interact The above examples of what you can do in the metaverse are all technicalities when you compare them to the ideal metaverse — a place almost capable of replacing reality. We’re not quite there yet (and we won’t be for years), but the efforts of companies like Meta or VRChat are bringing us closer to this than we’ve ever been before. In a perfect metaverse, you are capable of interacting with every person around you. This goes beyond the text-based chat we’ve all seen in games such as Second Life or Habbo. 
Incorporating voice communication, VR headsets, and AR glasses allows for interaction on a whole new level. Whether it’s meeting your friends and going skydiving or forming a study group in a virtual library, the main concept of the metaverse will always revolve around human interaction — just not in person. Work and study From Meta to Microsoft, many companies place a lot of emphasis on the ability to work, cooperate, and study together in the metaverse. Microsoft is planning to use Mesh to bring realism to otherwise dull video meetings. Meta hopes to create virtual workspaces, giving remote workers a chance to spend time together in virtual reality during their workday. The metaverse can also be used for work in different ways, including simulating real-world tasks in virtual reality first. This can be utilized by engineers, programmers, designers, and many other professionals through metaverses such as Nvidia’s Omniverse. The connection between the metaverse, cryptocurrencies, and NFTs When speaking of the metaverse, it’s impossible not to mention cryptocurrencies. After all, some of the biggest instances of it are based on the blockchain — the decentralized framework that cryptocurrencies operate within. One such example is Decentraland, a sandbox-like metaverse that lets its users buy plots of land, explore other plots, and interact with each other. The entire economy is based around MANA — a cryptocurrency used specifically in Decentraland. What sets these metaverses apart from commercially owned universes is the fact that they rely on a decentralized network where your assets are your own and are not controlled by the owner(s) of the metaverse. Cryptocurrencies, and therefore also the universes that are set around them, are usually decentralized. This means that the currency, virtual land, or the whole metaverse is not owned by a single entity and cannot be taken down, sold, or otherwise destroyed on a whim. 
Decentralization involves contracts distributed to a network of users and a majority vote. Unless the majority of the network votes to take the metaverse down, it should, in theory, remain accessible to everyone. This is not the case in gaming metaverses, such as World of Warcraft, where your account continues to belong to the company in charge of the game. This means that all of your assets, such as your equipment or your characters, are ultimately not yours to control. This is where non-fungible tokens (NFTs) come in. NFTs can be anything from (frankly, rather ugly) 8-pixel avatars to breathtaking works of art. At their core, NFTs are a decentralized way to assign ownership to virtual goods. Anyone can download a photo and claim it as their own, but NFTs involve cryptocurrency and contracts that pin down ownership to one particular user. In the metaverse, this opens up a whole new level of economy that turns this fantastical concept into a way for people to make (or lose) real-world money. Users can buy virtual plots of land, avatars, or even a hat for their metaverse avatar — all through the use of cryptocurrency. Non-fungible tokens are independent of the metaverse, but they do play a part in the economy of certain universes, such as Decentraland or The Sandbox — an upcoming metaverse that is not yet live. The Sandbox sells plots of land in the form of NFTs, assigning full ownership to the person who buys them. The users can then visit that plot and interact with its contents. Just one glance at The Sandbox’s map shows that this form of NFTs caught the interest of not just cryptocurrency fanatics, but also dozens of companies that see it as a new advertising space to explore. The future of the metaverse : No one can deny that the concept of the metaverse has started to spread to previously uncharted lands. We’ve gone a long way from its humble beginnings in games such as Second Life, Habbo Hotel, or even the long-gone, long-forgotten Club Penguin. 
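The NFT ownership idea described above, a token pinned to exactly one owner who alone can transfer it, can be sketched as a toy ledger. This is only an illustration of the ownership rule; a real NFT lives in a smart contract on a blockchain, with consensus replacing the single in-memory dictionary used here.

```python
# Toy model of NFT-style ownership: each token id maps to exactly
# one owner, and only the current owner may transfer it.  A sketch
# of the concept, not real blockchain code.

class Ledger:
    def __init__(self):
        self.owners = {}  # token_id -> current owner

    def mint(self, token_id, owner):
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner

    def transfer(self, token_id, sender, receiver):
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owners[token_id] = receiver

ledger = Ledger()
ledger.mint("land-42", "alice")           # alice buys a plot of land
ledger.transfer("land-42", "alice", "bob")  # and later sells it to bob
```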
Meta hopes to hit the ground running with Horizon Worlds. It will take years for the metaverse to permeate our reality to the point of being as widely known and accessible as what Meta is hoping to achieve.

Internet development to support the Metaverse
==========================================
Understanding Web 1.0, Web 2.0, and Web 3.0.

Web 1.0
Web 1.0 is the web technology that was first used in World Wide Web applications. Some equate Web 1.0 with the www itself, as it was widely used in personal websites. Some features or characteristics of Web 1.0 are:
1. Pages are static and serve only to display information.
2. Pages are designed in pure HTML, which 'only' allows people to view without any interaction.
3. Usually only some kind of online guest book is provided, with no intense interaction.
4. Forms are still sent via e-mail, so communication is usually only one way.

Web 2.0
Web 2.0 is a term first coined by O'Reilly Media in 2003, and popularized at the first Web 2.0 conference in 2004, referring to the perceived second generation of web-based services—such as social networking sites, wikis, communication software, and folksonomies—which emphasize online collaboration and sharing among users. O'Reilly Media, in collaboration with MediaLive International, used the term as a title for a number of conference series, and since 2004 several developers and marketers have adopted the phrase. Although the term may seem to denote a new version of the web, it does not refer to an update to the technical specifications of the World Wide Web, but rather to how system developers use the web platform. Tim O'Reilly defines the term as follows: “Web 2.0 is a business revolution in the computer industry brought about by the movement to the internet as a platform, and an attempt to understand the rules for success on that platform.”

Web 2.0 principles:
1. The web as a platform
2. Data as the main controller
3. Network effects created by a participatory architecture
4. Innovation in system assembly, with sites built by bringing together features from independent and distributed developers (a kind of open-source development model)
5. A lightweight business model, developed with a mix of content and service
6. The end of the software release cycle (the perpetual beta)
7. Easy for users to use and adopt

Web 2.0 has the advantage that it allows internet users to view the content of a website without having to visit the address of the site in question. Web 2.0 is an improvement over Web 1.0: on Web 1.0, a user who wanted to access a web page had to go directly to its address, while on Web 2.0 the user can simply click a link that points to the page. For example, when a user on a social networking site such as Facebook wants to visit a particular site, the user does not need to open it separately but can click the link on Facebook that points to it. In addition, Web 2.0 is very convenient to use because it runs directly on the internet, so users can use it whenever and wherever they are simply by going online.

Web 3.0
Web 3.0 is the third generation of web-based internet services. The concept of Web 3.0 was first introduced in 2001, when Tim Berners-Lee, founder of the World Wide Web, wrote a scientific article describing Web 3.0 as a means for machines to read Web pages. This means that machines will have the same ability to read the Web as humans have today.
Web 3.0 relates to the concept of the Semantic Web, which allows web content to be enjoyed not only in the user's natural language, but also in a format that is accessible to software agents. Some experts even name Web 3.0 the Semantic Web itself. What is unique about Web 3.0 is the idea that humans can communicate with search engines: we can ask the Web to search for specific data without having to search site by site ourselves. Web 3.0 can also provide relevant information about what we want to find, even without our asking. Web 3.0 consists of:
1. The Semantic Web
2. Microformats
3. Search in the user's own language
4. Huge amounts of data storage
5. Machine learning
6. Recommendation agents, which refer to the artificial intelligence of the Web
Web 3.0 offers an efficient method of helping computers organize and draw conclusions from online data. Web 3.0 also allows Web features to serve as data storage with extraordinarily large capacity. It is often said that Web 3.0 leads to the Semantic Web. The term Semantic Web itself refers to a development of the web in which content is presented not only in human-language format (natural language), but also in a format that can be read and used by machines (software). As we know, websites are intended to provide information. For example, when you want information on a book, you can search on a search engine to get information about that book.
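The machine-readable web idea above can be sketched with the book example: the same page is described once as prose for humans and once as structured data a software agent can parse. The snippet below uses schema.org-style JSON-LD vocabulary (`Book`, `Person`, `author`, `datePublished` are real schema.org terms), but it is only an illustrative sketch, not a full semantic-web stack:

```python
import json

# The same information, twice: prose for humans, structured data for machines.
page_html = "<p>Snow Crash by Neal Stephenson, published 1992.</p>"

structured = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Snow Crash",
    "author": {"@type": "Person", "name": "Neal Stephenson"},
    "datePublished": "1992",
}

# A software agent "reads" the page by querying named fields,
# instead of trying to understand the natural-language sentence.
record = json.loads(json.dumps(structured))
print(record["name"], "-", record["author"]["name"])
```

This is why a semantic search engine can answer "books by Neal Stephenson" directly: the author relation is explicit in the data rather than buried in prose.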
Sense of Metaverse __________________ “The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments,” writes venture capitalist Matthew Ball in the foreword of his outstanding nine-part essay on the metaverse — The Metaverse Primer. Calling the metaverse a “quasi-successor state to the mobile internet”, he writes, “the Metaverse will not fundamentally replace the internet, but instead build upon and iteratively transform it.” Because the metaverse is constantly developing, there is every possibility that it will be much grander and more immersive by the time it becomes a reality than how it is being imagined today. The metaverse is presented as the ultimate evolution of the internet — a kind of virtual reality where any virtual interaction can have a direct impact on the real world too. The common thread in all definitions is that the metaverse is a virtual reality wherein, as the technology advances, people will be able to do everything they do in real life. In the virtual space of the metaverse, everything people do in the real world is replicated. A VR headset, or any other wearable gadget specifically designed for the purpose, will function as a gateway into this world. More advanced metaverse platforms include Roblox and Fortnite. The former is particularly interesting. The fact is that the metaverse will become as real and common as the internet. As we can see, it is but a matter of time. “The metaverse isn’t going to be created by one company. It will be created by millions of developers each building out their part of it.” So, in other words, the metaverse is still being constructed brick by brick and everyone will have a hand in its creation. 
By the time the metaverse becomes mainstream, maybe we will have systems like Starlink widely available to deliver high-speed data to the remotest corners of the planet. But even as its reach increases, the metaverse might still struggle to entice people.
Metaverse Instrument and Control Circuit ________________________________________ Many companies are building the metaverse, and some of its strongest advocates are pushing to make sure it is open and interoperable, so users can own their data and bring it from one metaverse to another. Meta envisions a virtual world where digital avatars connect through work, travel or entertainment using VR headsets. Zuckerberg has been bullish on the metaverse, believing it could replace the internet as we know it. The broader launch of Horizon Worlds is an important step for Facebook, which officially changed its name to Meta in October. The company adopted the new moniker, based on the sci-fi term metaverse, to describe its vision for working and playing in a virtual world. Whether in virtual reality (VR), augmented reality (AR) or simply on a screen, the promise of the metaverse is to allow a greater overlap of our digital and physical lives in wealth, socialization, productivity, shopping and entertainment. Entertainment may well be the proving ground where the metaverse starts. Simply put, the metaverse is the creation of a virtual universe where avatars of human beings will be present through technology. Such metaverses can be created by various companies for various reasons: entertainment, education and business. Who will control the metaverse? Now that Facebook has shifted to Meta, who will pull the strings in what is being touted as the next iteration of the internet? Eight digital-marketing experts discuss whether decentralisation will come to fruition, or whether big tech players will keep their walled gardens up. 
There are two core components that define a 'true' metaverse, according to early investors in the concept: decentralisation and interoperability. Rather than having a centralised entity pulling the strings, decentralisation distributes control and decision-making to a network. In this open, permissionless environment, consumers are in command, able to build and determine the future of their experiences and be sovereign over their own identity and creations. Decentralisation is largely facilitated by blockchain technology, which allows users to track the provenance and ownership of digital assets on a virtual ledger and could power self-sovereign identities in the future. Storing data with users as opposed to with individual platforms allows for interoperability, by which users can easily teleport from one experience to the next using the same 'digital twin'. This is fundamentally different to the closed ecosystems of Web 2.0, where platforms own customer data and digital assets are platform or game-specific. Building 'walled gardens' around customer data has allowed tech platforms to build powerful advertising engines that have made them the trillion-dollar companies they are today. Are these giants prepared to rescind control over their strongest asset, customer data, in the so-called next iteration of the internet? Facebook cofounder Mark Zuckerberg, who is attempting to transition the social-media company into a "metaverse company", recently discussed the challenge of interoperability. In its third-quarter earnings call on October 25, he said the company needs to strike a balance between "enabling research and interoperability with locking down data as much as possible". It's a concerning sentiment for those who want the metaverse to be built upon the consumer empowerment that decentralisation affords, especially given how much investment big tech is pouring into the metaverse. 
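The provenance-tracking role of the blockchain described above can be sketched as a hash-chained ledger: each entry commits to the previous one, so an asset's ownership history is tamper-evident. This is a simplified single-machine sketch (function names are invented for illustration), not a distributed ledger with consensus:

```python
import hashlib
import json

# Each ledger entry records an asset's new owner and the hash of the
# previous entry, so the whole ownership history is tamper-evident.
def entry_hash(fields):
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

def append(ledger, asset, new_owner):
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"asset": asset, "owner": new_owner, "prev": prev}
    entry["hash"] = entry_hash({k: entry[k] for k in ("asset", "owner", "prev")})
    ledger.append(entry)

def verify(ledger):
    prev = "genesis"
    for e in ledger:
        fields = {k: e[k] for k in ("asset", "owner", "prev")}
        if e["prev"] != prev or e["hash"] != entry_hash(fields):
            return False  # chain broken: some entry was altered
        prev = e["hash"]
    return True

ledger = []
append(ledger, "avatar-001", "alice")   # minted to alice
append(ledger, "avatar-001", "bob")     # provenance: alice -> bob
print(verify(ledger))                    # True
ledger[0]["owner"] = "mallory"           # tampering with history...
print(verify(ledger))                    # False: the chain no longer verifies
```

In a real blockchain the same chaining idea is combined with replication and consensus across many nodes, which is what removes the need to trust any single platform with the record.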
Facebook renamed its portfolio of companies to Meta last week to reflect its ambition to build the metaverse and has set aside "billions" to develop it. While Zuckerberg has said the company wants to support interoperability, its approach to hardware tells a different story. For example, the company's Oculus VR devices require a Facebook account to use. In the fourth part of Campaign Asia-Pacific's series delving into the metaverse and how brands should prepare for it, we ask eight experts from across the marketing universe for their thoughts on control and ownership in a virtual realm. Will it be dictated by big tech or by consumers? Will we end up with multiple versions of the metaverse rather than a single, utopian, decentralised world? How will platforms derive value and protect their most valuable assets? Expect the current walled gardens to exercise all their energies on taking control of the metaverse, which would mean it wouldn’t be a metaverse at all. And expect the innovators and entrepreneurs who make small, early moves either to resist this, or to sell out to the walled gardens. A lot will depend on whether those innovators are seeking a lucrative early sale, an even bigger payday down the line via their own IPO, or the creation of a new, decentralised internet 3.0. It could all come down to the personal life choices—shaped during the pandemic—of a handful of as-yet-unknown Lords of the Metaverse. Place your bets, place your investments, place your real dream.
The evolution of a universal digital platform ____________________________________________ In technology, first-mover advantage is often significant. This is why BigTech and other online platforms are beginning to acquire software businesses to position themselves for the arrival of the Metaverse. They hope to be at the forefront of profound changes that the Metaverse will bring in relation to digital interactions between people, between businesses, and between them both. What is the Metaverse? The short answer is that it does not exist yet. At the moment it is a vision for what the future will be like, where personal and commercial life is conducted digitally in parallel with our lives in the physical world. Sounds too much like science fiction? For something that does not exist yet, the Metaverse is drawing a huge amount of attention and investment in the tech sector and beyond. Here we look at what the Metaverse is, what its potential is for disruptive change, and some of the key legal and regulatory issues future stakeholders may need to consider. The Metaverse is still just an idea. What is the idea and what are the basic building blocks for it? The Metaverse will be the outcome of the convergence of a range of nascent and extant digital and online technologies. It may start off as a focus for gaming, virtual reality, digital meeting spaces, digital assets (such as non-fungible tokens - see our briefing, Anatomy of an NFT), and perhaps even brain-to-machine interactions, but it will not end with that. Its scope and impact may expand when Artificial Intelligence is included, and when data from the physical world is brought in via the Internet of Things (integrating humans as well as businesses in ever more digital / physical co-existence). The Metaverse’s true potential lies, however, not in ever more convergence for its own sake, but in the outcome of that. 
It may evolve to become a universal digital platform for personal and commercial interactions – the platform replacing the current technology stack of the world wide web operating on top of the Internet – and become the source of the most valuable data about consumers available to the business world. What are the characteristics of the Metaverse?
1. Persistence: the Metaverse will exist regardless of time and place.
2. Synchronicity: participants of the Metaverse will be able to interact with one another and the digital world in real time, reacting to their virtual environment and each other just like they would in the physical world.
3. Availability: everyone will be able to log on simultaneously and there will be no cap on the number of participants.
4. Economy: participants - including businesses - will be able to supply goods and services in exchange for value recognised by others. That value may start off as (or include) the kind of value that video game players already use now (for example, fiat currency exchanged for virtual gold and in-game items). It may also include non-fungible tokens, cryptocurrency, and e-money, along with more traditional fiat currency. Such exchanges of value may depend upon technologies such as distributed ledger technologies and smart contracts, and technologies not even thought of yet.
5. Interoperability: the Metaverse will allow a participant to use his or her virtual items across different experiences on the Metaverse. For instance, a user experience may include cross-platform capability, allowing, say, a vehicle unlocked in a racing game to be used in a different adventure game, or an item of clothing purchased on the Metaverse to be “worn” and used in games, concerts, and any other virtual environments available. 
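The interoperability characteristic (point 5) can be sketched as a platform-neutral asset record that travels with the user, which each world renders in its own way. All names below are invented for illustration; real interoperability would also need shared standards for the record format:

```python
# A platform-neutral asset record: the item lives with the user,
# and each virtual world maps it onto whatever that world supports.
asset = {"id": "jacket-7", "kind": "clothing", "colour": "red", "owner": "alice"}

def render_racing_game(a):
    # This world has no clothing slot, so it applies the item as a car livery.
    return f"{a['colour']} livery for {a['owner']}'s car"

def render_concert_world(a):
    # This world dresses the avatar directly.
    return f"{a['owner']}'s avatar wears a {a['colour']} {a['kind']} item"

print(render_racing_game(asset))
print(render_concert_world(asset))
```

The design point is that the asset is defined once, outside any single platform, and the platform-specific `render_*` functions adapt it, rather than each platform owning its own incompatible copy.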
As the Metaverse moves beyond gaming, businesses participating may need to move beyond existing proprietary methods of shoring up their market positioning – controls over formats for the exchange of data and over verification of ID, for example, will need to change. Why would the Metaverse be any different from the World Wide Web? An Internet interaction today depends on a specific server communicating with another server or an end user device on an as-needed basis. The Internet simulates simultaneous interactions, but in reality they are different instances separated by fractions of seconds which, for the most part, we do not notice. The Metaverse will be more akin to simultaneous video calls in terms of user experience and interaction. In other words, simultaneous many-to-many communication. To achieve that, different infrastructure will be required, perhaps on a distributed or decentralised basis. While based on Internet infrastructure, there are already successful distributed/decentralised computing models (for example, distributed ledger technology and cloud computing) that might point to the future of what the infrastructure might be like. “The Metaverse will require countless new technologies, protocols, companies, innovations, and discoveries to work. And it won’t directly come into existence; there will be no clean ‘Before Metaverse’ and ‘After Metaverse’. Instead, it will slowly emerge over time as different products, services, and capabilities integrate and meld together.” The revolutionary nature of the Metaverse is likely to give rise to a range of complex legal and regulatory issues. We consider some of the key ones below. As time goes by, naturally enough, new ones will emerge. Data Participation in the Metaverse will involve the collection of unprecedented amounts and types of personal data. Today, smartphone apps and websites allow organisations to understand how individuals move around the web or navigate an app. 
Tomorrow, in the Metaverse, organisations will be able to collect information about individuals’ physiological responses, their movements and potentially even brainwave patterns, thereby gauging a much deeper understanding of their customers’ thought processes and behaviours. Users participating in the Metaverse will also be “logged in” for extended amounts of time. This will mean that patterns of behaviour will be continually monitored, enabling the Metaverse and the businesses (vendors of goods and services) participating in the Metaverse to understand how best to service the users in an incredibly targeted way. Therefore, in the Metaverse, a user will no longer need to proactively provide personal data by opening up their smartphone and accessing their webpage or app of choice. Instead, their data will be gathered in the background while they go about their virtual lives. This type of opportunity comes with great data protection responsibilities. Businesses developing, or participating in, the Metaverse will need to comply with data protection legislation when processing personal data in this new environment. The nature of the Metaverse raises a number of issues around how that compliance will be achieved in practice. Who is responsible for complying with applicable data protection law? In many jurisdictions, data protection laws place different obligations on entities depending on whether an entity determines the purpose and means of processing personal data (referred to as a “controller” under the EU General Data Protection Regulation (GDPR)) or just processes personal data on behalf of others (referred to as a “processor” under the GDPR). In the Metaverse, establishing which entity or entities have responsibility for determining how and why personal data will be processed, and who processes personal data on behalf of another, may not be easy. 
It will likely involve picking apart a tangled web of relationships, and there may be no obvious or clear answers – for example: Will there be one main administrator of the Metaverse who collects all personal data provided within it and determines how that personal data will be processed and shared? Or will multiple entities collect personal data through the Metaverse and each determine their own purposes for doing so? Either way, many questions arise, including: How should the different entities each display their own privacy notice to users? Or should this be done jointly? How and when should users’ consent be collected? Who is responsible if users’ personal data is stolen or misused while they are in the Metaverse? What data sharing arrangements need to be put in place and how will these be implemented? Biometric data Virtual reality headsets and glasses will likely be commonplace in the Metaverse (unless they are replaced by something more sophisticated in the meantime, such as direct electronic/brain interfaces). Such devices have the potential to collect a wide range of sensitive data about the wearer (for example, eye and body movements, physiological responses and even brainwave patterns, etc.) To the extent that this data is used by actors in the Metaverse to learn about the user or to make decisions about them, then it will be considered to be special category data under the GDPR. This means that extra conditions would need to be satisfied. Most importantly, the user would most likely need to give their explicit consent for each purpose for which the data is used. Let’s take the hungry woman example described above. If the woman was targeted with food adverts using gaze analysis technology, for this to be lawful she would have needed to have given her express permission. A general marketing consent would not suffice. 
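The explicit-consent rule described above (a general marketing consent does not cover gaze-based targeting) can be sketched as a check that refuses to process special-category data unless consent was recorded for that exact purpose. This is a toy data-structure sketch, not legal advice; the purpose labels are invented for illustration:

```python
# Toy consent store: explicit consent is recorded per user *and* per purpose.
# A blanket "general-marketing" consent does not cover gaze-based ad targeting.
consents = {
    ("user-1", "gaze-based-ad-targeting"): True,
    ("user-2", "general-marketing"): True,
}

def may_process(user, purpose):
    # Processing is allowed only if consent exists for this exact purpose;
    # absence of a record defaults to refusal.
    return consents.get((user, purpose), False)

print(may_process("user-1", "gaze-based-ad-targeting"))  # consented for this purpose
print(may_process("user-2", "gaze-based-ad-targeting"))  # general consent does not suffice
```

The key design choice, mirroring the text, is that the lookup key includes the purpose: there is no code path by which a broader consent can satisfy a narrower, more sensitive one.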
Quite how this consent would be sought and given is a question that goes to the issue of whether the Metaverse can operate on a decentralised/distributed model, discussed below (see Decentralised / distributed models). Consent to marketing A key driver in the development of the Metaverse is its potential to enable new forms of marketing which are seamlessly integrated into the fabric of the Metaverse. For example, an individual heading to a store in the Metaverse might be shown deals on his/her favourite products in real time as he/she is browsing the shelves based on his/her previous behaviour. This is likely to constitute direct marketing under many countries’ data protection laws, which could require the consent of the Metaverse users. The precise nature of the obligations would likely depend on whether the brands themselves instigate the marketing and how the marketing is presented, including whether the presentation of marketing is more akin to online behavioural advertising or social media marketing (where a network of participants operate to present relevant advertising). However, in all cases, thought needs to be given as to how and when any required consent would be collected and, in particular, whether “real world” consent can be relied on by brands in the Metaverse and vice versa. Data sharing To enable interoperability, data collected by one entity in the Metaverse may have to flow seamlessly between different operators and even platforms. As interoperability improves and the consumers are allowed to move digital assets and avatars between platforms and across the Metaverse, software developers and brands will need to establish bilateral or multilateral data sharing agreements to improve the seamlessness of the consumer experience. 
This is not altogether different from the current environment in which databases are bought/sold, but there are conditions which must be met first. For example, one requirement under many data protection laws is that the receiving party’s privacy notice must be provided to an individual shortly after it receives the data to explain to the individual how their personal data will be processed. These conditions will become increasingly difficult to meet in the Metaverse, where data exchange is rapid and involves a multitude of participants. One solution to this might be for a central administrator of the Metaverse to give users a clear description of how their data will be used and (if necessary) the opportunity to give consent for various uses. However, data protection regulators have expressed distaste for this type of “catch-all”, bundled approach. These types of objections are likely to be more forceful in relation to the Metaverse, where the amount of data collected and complexity of data sharing networks is significantly greater in scale. Data export and localisation “Seamlessness” in the Metaverse demands that data crosses boundaries at speed and without friction. It will be challenging for organisations and/or central Metaverse administrators to manage this while the rules around data export and localisation are becoming increasingly strict. Many countries are also beginning to roll out “data localisation” laws which can impose onerous restrictions on data leaving the country in which it was collected (see our publication, Free Flow of Data). It would not be surprising to see developers and/or brands getting together to try and agree large, overarching data sharing/export agreements, although how feasible such initiatives might be remains to be seen. Responsibility for data breaches and cyber-attacks As with any online platform, the Metaverse will face the usual challenges of fending off cybersecurity incidents and data breaches. 
However, in the Metaverse these types of attacks may also take more ‘sci-fi’ type forms through deep fakes and hacked avatars. These types of incidents may therefore be harder to identify, verify and bring under control, and it may also be difficult to ascertain where responsibilities lie in respect of breach notification to users and data protection authorities, given the complex web of relationships that entangle the Metaverse. Decentralised / distributed models The discussion on data, above, underscores a number of competing tensions that will need to be addressed in the Metaverse: Participants will want a seamless experience in traversing the subsystems of the Metaverse. The platform technology itself may be decentralised. How will data sharing and a seamless user experience be possible in such circumstances if there is not central co-ordination by, say, an administrator? How will vendors who do not know each other and may have no commercial connection co-operate in relation to the exchange of data? Vendors will want to have customer “ownership”. To do that, they may want their own terms and conditions to which a participant subscribes. Will this mean that large areas of the Metaverse will be gated (greatly reducing the user experience)? At the moment, if we want access to the world wide web, we subscribe to an ISP’s terms and conditions for that access, but such terms and conditions do not prescribe the terms applicable to our access to particular websites on the world wide web. We are used to “partitioned” access to websites, governed by separate clickwrap or webwrap terms and conditions. As that approach does not lend itself to seamlessness, how will it be addressed in the Metaverse? Universal terms and conditions seem unlikely, so would technology provide the solution (for example, self-executing smart contracts)? Competition Law Competition law issues may arise as a consequence of both developer and participant conduct. 
Businesses developing Metaverse products and services on their own are unlikely to face antitrust concerns. However, the global and interoperable nature of the Metaverse will inevitably encourage multiple businesses to communicate and co-operate with each other in order to provide greater choice and a better experience to participants. Where they are competitors, communications or co-operation between Metaverse offerings could give rise to antitrust issues, which will need to be examined with caution. For example, while co-operation among competing Metaverse businesses to facilitate interoperability will most likely be viewed as pro-competitive, any sharing of competitively sensitive information (especially pricing) or agreeing on separate areas of focus and development could constitute serious antitrust infringements and lead to high fines. To mitigate this risk, Metaverse businesses will need to implement competition policies and training programmes, not only for their employees but, potentially, for certain Metaverse participants as well. Similar to other online gaming platforms, participants in the Metaverse could engage in conduct that would contravene antitrust laws in the real world. Where online products and services hold real-world value, real-world antitrust laws (such as the prohibition on cartels and joint boycotts) will also apply, which could have both civil and criminal consequences for those participating. Social Media Regulation Will social media regulation impact upon Metaverse stakeholders? It is difficult to speculate, so far in advance, on what the legal position will be in relation to the Metaverse when social media itself is not yet much regulated globally. BigTech, as incumbents, have a particular interest in the evolution of the Metaverse. Some commentators are calling for tougher regulation in order to make BigTech more accountable for content that appears on their platforms, and tougher rules are under consideration in many jurisdictions for BigTech and social media platforms. 
As the Metaverse emerges, key stakeholders will face the same kind of scrutiny in relation to the same kind of content. Intellectual property rights If you collaborate with others to generate intellectual property rights, who owns the created rights? Principles of joint authorship and co-ownership are complicated, and their application will become more so in complex virtual world scenarios where a community of stakeholders may have been involved. It is for these types of reasons that the European Commission is considering legal reforms to clarify the position on “co-generated” data arising out of new technologies, as well as in relation to machine-generated data. Metaverse stakeholders will need to navigate these kinds of issues when participating in the Metaverse. An IPR licence is a permission to do that which would otherwise be forbidden by intellectual property rights. The fast-moving world of the Metaverse may involve character “mash-ups” and the bringing together of intellectual property rights owned by separate stakeholders. Infringements caused by “use in combination” with other intellectual property rights is a typical carve-out in indemnities included in licences for intellectual property, but “use in combination” is precisely the kind of scenario that the Metaverse will bring about. Traditional risk allocations in IPR licences will need to be reviewed, as will scope of use provisions.
The digital technology giants currently building the Metaverse are listed below.
1. Facebook (Meta): By changing the Facebook name to Meta, they are ready to become Metaverse initiators. Facebook has also introduced its initial version, Horizon Worlds. Facebook is now preparing various technologies that can support the Metaverse, such as VR Messaging, Project Cambria and Horizon Marketplace.
2. Tencent: Tencent has platforms, namely WeChat and Tencent QQ, that will support the development of the Metaverse. Tencent QQ has e-commerce, movies, music, voice chat and gaming, while WeChat has a mobile payment system. Keep in mind that the Metaverse is a virtual world that connects with the real world: users will be able to gather in the Metaverse as avatars, interacting as in the real world in real time. Developing the Metaverse is very difficult, however, so it will take a long time, and various parties are now preparing to bring real-world reality into it.
3. Epic Games: Epic Games is one of the game companies preparing to build the Metaverse. Epic Games has announced to the public that it will fund $1 billion to build the Metaverse.
4. Microsoft: Microsoft started its steps in the Metaverse by introducing Mesh for Teams, which can be used for presentations, meetings and chatting via avatars without having to meet in person.
5. Binance: Binance is a company that can support financial transactions in the Metaverse, such as the Binance NFT Marketplace, which is useful for trading NFT assets from various blockchains.
6. Google: Google will start its Metaverse work by developing the Google Glass product. In 2021, Google developed VR and AR devices through the Starline project.
Earth 2 (earth2.io), a virtual-land replica of Earth.
Metaverse Poem by Agustinus Manguntam Siboro / WIPER / GLOCK ====================================== Life is ordinary when we think normally, spin around when we think there is a result because it waits for the time but it is useless if we are still living limited by space and time, life revolves but progresses we do it as a loop motion, every cloud has a name and height, every gate has a key, every restaurant has a different way of serving, every value and price has a time and time to make it meaningful, every food menu tastes different because the ingredients are also different, enjoyment only reaches the tongue and neck, after being processed in the stomach it feels the same, alive not only to eat but also to love and care, every love and affection always sacrifices effort and effort, working well but still paying attention to opportunities love and affection is a dream come true through the metaverse of love, work, and affection, time and effort can be optimal , make us live better , modern and dignified .
Metaverse - Connect with The Feel of Real In Virtual =========================================== The Metaverse has stirred our imagination. It has invaded our digital space, with people talking glibly about its possibilities and its privacy risks too. Before jumping to their comments, let's decode the term. Metaverse is a portmanteau of meta (beyond) and verse (universe). True to the term, the Metaverse is a fictitious virtual space, yet a digital world in which you can feel real and replicate many of the activities of your real life. What if I claim you can hold office meetings, collaborate with teams, play, socialize, shop, walk in the snow or try on new apparel, all in a virtual world? Yes, cutting-edge technology makes it possible. Augmented Reality (AR) glasses and Virtual Reality (VR) headsets can teleport you to a different world where you don't share the same physical space but remain together with your folks, connected.

Sci-Fi Feel or Internet in Motion? Sounds like I have pulled a scene from a sci-fi flick? Not really. Many have already dubbed the Metaverse the next frontier of the Internet, or the Internet coming to life. I can't be sceptical when I see it through the lens of possibilities. Let's say: how does the Metaverse change the way you use Facebook? Now, you are posting text messages and videos: a passive, 2D form of communication. The Metaverse elevates your social rendezvous to the next level, where your digital avatars can socialize, play games, attend concerts, et al., in real time. For example, Mark Zuckerberg's Facebook has rolled out Horizon, a place to explore, play and create with others in VR. What's more, Facebook's Reality Labs is researching lifelike Codec Avatars to make the Metaverse a more immersive experience.

Making the Metaverse Secure and Sustainable. As a platform, the Metaverse has evolved only a little beyond the concept stage. It may take a decade for the Metaverse to deliver its benefits. The swirling concerns are privacy and security.
There is the possibility of advertisers snooping on you in the virtual space. The digital space of every unique user in the Metaverse needs to be protected, because the Metaverse is going to define how we connect on the internet in the future. The solution is the democratization of the concept: a Metaverse platform free from the hegemony of a clutch of developers, one that can be created by you and me. A Metaverse for All: yes, I am sounding prophetic yet realistic. Let's be ready to explore and romp in the Metaverse via our digital doppelgangers.
Case Study NEURALINK ==================== Neuralink is a device that neurosurgeons surgically insert into the brain with robotic assistance. In this procedure, a chipset called the Link is implanted in the skull, with a number of insulated wires connecting it to the electrodes used in the process. At first glance Neuralink resembles the Metaverse, but its implementation technique is different, even though the two share a common technological lineage. Neuralink is an American neurotechnology company founded by Elon Musk and others, with the goal of developing an implanted brain-machine interface (BMI). Neuralink announced that it is designing a "sewing machine-like" device capable of implanting very thin threads (4 to 6 μm wide) into the brain; it demonstrated a system that reads information from laboratory mice via 1,500 electrodes, and anticipated starting human experiments within a few years of 2020. SpaceX and Tesla founder Elon Musk has demonstrated a brain-computer interface chip made by Neuralink, another company he founded. The chip is expected to create a symbiosis between humans and artificial intelligence (AI). During a live broadcast hosted by the neurotechnology startup, Musk showed off a chip that had been implanted directly into a pig's brain. The original purpose of the Neuralink chip is to treat brain disorders and diseases in humans; after that, the chip is expected to develop further, for example controlling cellphones or summoning Tesla cars by thought. Musk describes the neural lace as a "digital layer above the cortex" that would not necessarily require extensive surgical insertion, but could ideally be implanted through a vein or artery. Musk has explained that his long-term goal is to achieve "symbiosis with artificial intelligence", which he considers an existential threat to humanity if left uncontrolled.
As of 2017, some neuroprosthetics could interpret brain signals and allow disabled people to control their prosthetic arms and legs. Musk talked about linking the technology with implants that, rather than merely triggering movement, can connect at broadband speeds to a wide variety of software and external devices. In 2020, Neuralink was headquartered in the Mission District of San Francisco, sharing the former Pioneer Trunk Factory building with OpenAI, another company co-founded by Musk. Musk was the majority owner of Neuralink as of September 2018, but held no executive position; the CEO role is held by Jared Birchall, who has also been listed as CFO and president of Neuralink and as an executive of various other companies founded or co-founded by Musk. The "Neuralink" trademark was purchased from its previous owner in January 2017. As of August 2020, only two of the eight founding scientists remained with the company, according to a Stat News article reporting that Neuralink had experienced "years of internal conflict" in which rushed timelines clashed with the slow, incremental pace of scientific development. At a live-streamed presentation at the California Academy of Sciences, Neuralink described its proposed future technology: a module placed outside the head that wirelessly receives information from thin, flexible electrode threads embedded in the brain. The system could cover "as many as 3,072 electrodes per array, distributed across 96 threads", each thread 4 to 6 μm wide. The threads would be embedded by robotic equipment, with the aim of avoiding vascular damage. At present the electrodes are still too large to record the firing of individual neurons, so they can only record the firing of groups of neurons. Neuralink representatives believe this problem can be mitigated algorithmically, but such processing is computationally expensive and does not produce precise results.
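The array figures quoted above (3,072 electrodes across 96 threads) invite a quick back-of-the-envelope sizing. The sketch below works out the per-thread count and an illustrative raw data rate; the sampling rate and sample width are my own assumptions, typical for extracellular spike recording, and are not published Neuralink specifications.

```python
# Back-of-the-envelope sizing for the electrode array described above.
# The 3,072-electrode / 96-thread figures come from the text; the sampling
# rate and sample width are illustrative assumptions, not Neuralink specs.

ELECTRODES = 3072
THREADS = 96
SAMPLE_RATE_HZ = 20_000   # assumed: common rate for extracellular recording
BYTES_PER_SAMPLE = 2      # assumed: a 10-16 bit ADC sample stored in 2 bytes

electrodes_per_thread = ELECTRODES // THREADS
raw_bytes_per_second = ELECTRODES * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE

print(f"electrodes per thread: {electrodes_per_thread}")        # 32
print(f"raw data rate: {raw_bytes_per_second / 1e6:.1f} MB/s")  # ~122.9 MB/s
```

Even under these modest assumptions, the uncompressed stream runs to roughly 120 MB/s, which is why on-chip spike detection and compression matter for a wireless implant.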
Time Study and Technique: Uploading Thoughts =========================================== Mind uploading, also known as whole-brain emulation (WBE), is a theoretical futuristic process of scanning the brain's physical structure accurately enough to create an emulation of its mental states (including long-term memory and the "self") and transferring or copying them to a computer in digital form. The computer would then run a simulation of the brain's information processing, so that it responds in essentially the same way as the original brain and experiences having a conscious mind. Substantial mainstream research in related fields is being carried out on mapping and simulating animal brains, faster supercomputers, virtual reality, brain-computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to its proponents, many of the tools and ideas needed to achieve mind uploading already exist or are under active development; they admit, however, that others are still highly speculative, while maintaining that these too remain in the realm of engineering possibility. Mind uploading could potentially be done by one of two methods: copy-and-upload, or gradual replacement of neurons (which can be thought of as gradual destructive uploading) until the original organic brain no longer exists and a computer program that mimics the brain takes control of the body. In the former method, mind uploading would be accomplished by scanning and mapping the salient features of the biological brain, then storing and copying that information state into a computer system or other computing device. The biological brain may not survive the copying process, or may be deliberately destroyed during it in some upload variants. The simulated mind could live in virtual reality or a simulated world, supported by an anatomical 3D body-simulation model.
Alternatively, the simulated mind could run on a computer inside (or connected to, or remotely controlling) a robot (not necessarily humanoid) or a biological or cybernetic body. Among some futurists and within part of the transhumanist movement, mind uploading is treated as a proposed life-extension technology. Some believe that uploading is humanity's best current option for preserving the identity of the species, compared with cryonics. Other purposes of mind uploading include providing a permanent backup of our "mind file", enabling interstellar space travel, and giving human culture a means of surviving a global catastrophe by running functional copies of human society in computing devices. Whole-brain emulation is discussed by some futurists as a "logical endpoint" of the topical fields of computational neuroscience and neuroinformatics, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and at least weak superintelligence; another approach, seed AI, would not be based on existing brains at all. Computer-based intelligences such as uploads could think much faster than biological humans even if they were no smarter, and a large-scale community of uploads might, futurists argue, give rise to a technological singularity: a sudden, runaway acceleration of technological development. The concept of mind uploading rests on this mechanistic view of the mind and rejects the vitalist view of life and human consciousness. Researchers have built brain models and produced a range of estimates of the computational power required for partial and complete simulations. Using these models, some predict that uploading could become possible within decades if trends such as Moore's law continue.
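The "within decades" claim follows directly from a Moore's-law extrapolation. The sketch below shows the arithmetic; both compute figures are illustrative assumptions chosen for the sake of the example, not measurements or consensus estimates.

```python
import math

# How "within decades" falls out of a Moore's-law extrapolation.
# Both compute figures below are illustrative assumptions.

required_flops = 1e18        # assumed: one published-range estimate for WBE
current_flops = 1e15         # assumed: compute available to a project today
doubling_period_years = 2.0  # classic Moore's-law doubling time

doublings_needed = math.log2(required_flops / current_flops)
years_needed = doublings_needed * doubling_period_years
print(f"{years_needed:.1f} years")  # ~19.9 years, i.e. "within decades"
```

Note how sensitive the conclusion is to the inputs: each factor of 1,000 in the required/available ratio adds only about 20 years, which is why optimists and sceptics can differ by orders of magnitude in compute yet only decades in timing.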
In theory, if information and thought processes could be separated from the biological body, they would no longer be bound to that body's individual limits and lifespan. Furthermore, the information within a brain could be partially or completely copied or transferred to one or more other substrates (including digital storage or another brain), thereby, from a purely mechanistic perspective, reducing or eliminating the "mortality risk" of that information.

Space exploration ================ An "uploaded astronaut" could be used instead of a live astronaut in human spaceflight, avoiding the dangers of zero gravity, vacuum and cosmic radiation to the human body. This would allow the use of smaller spacecraft, such as the proposed StarChip, and would enable interstellar travel over practically unlimited distances. The focus of mind uploading, in the copy-and-transfer sense, is on data acquisition rather than data maintenance in the brain. A family of approaches known as loosely coupled off-loading (LCOL) may be used to try to characterize and copy the mental contents of a brain; the LCOL approach can exploit self-reports, life logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach instead focuses on the specific resolution and morphology of neurons and on spike timing, the times at which neurons generate action-potential responses. Proponents of mind uploading point to Moore's law to support the idea that the necessary computing power is expected to be available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such arguments speculative.
Regardless of the technique used to capture or recreate the functioning of a human mind, the processing demands are likely to be immense, because of the large number of neurons in the human brain and the considerable complexity of each neuron. The required computational capacity depends strongly on the chosen level of simulation detail. When modeling and simulating the brain of a particular individual, a brain map or connectome database capturing the connections between neurons must be extracted from an anatomical model of that brain. For whole-brain simulation, this network map must cover the connectivity of the entire nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of small tissue samples from mouse brains, including synaptic detail, became possible in 2010. However, if short-term and working memory involve prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal states of the synapses and neurons may be difficult to extract; the uploaded mind might then perceive a loss of memory of the events and mental processes immediately preceding the time of the brain scan. A complete brain map has been estimated to occupy less than 2 × 10^16 bytes (20,000 TB), storing the addresses of the connected neurons, the synapse type, and the synapse "weight" for each of the brain's roughly 10^15 synapses. However, the biological complexities of actual brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
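The storage estimate above is easy to reproduce. The sketch below assumes 20 bytes per synapse, a figure chosen so that a per-synapse record (source/target neuron addresses, synapse type, weight) matches the 2 × 10^16-byte total quoted in the text; the exact record layout is an assumption.

```python
# Reproducing the connectome storage estimate quoted above.
# 20 bytes per synapse is an assumed record size (neuron addresses,
# synapse type, weight) consistent with the ~2e16-byte total in the text.

SYNAPSES = 10**15
BYTES_PER_SYNAPSE = 20  # assumed per-synapse record size

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"{total_bytes:.1e} bytes = {total_bytes / 10**12:,.0f} TB")
# 2.0e+16 bytes = 20,000 TB
```

This is only the static wiring diagram; as the passage notes, dynamic state (epigenetics, multi-state proteins) could inflate the true figure unpredictably.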
Let's review a small present-day technique for temporary implants in human tissue: ============================================= Monitoring and control of infusion fluids is carried out by nurses or, nowadays, by microcontroller equipment, using both local and remote controls to record patient data before and after infusion treatment. Monitoring provides status information from measurements and evaluations repeated over time. Controlling the infusion flow rate is especially important where the line is associated with implants in nerve tissue.
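The microcontroller-style monitoring logic can be sketched in a few lines. This is a minimal illustration assuming an optical drop sensor that reports a timestamp for each falling drop; the drop factor of 20 drops/mL is a common macro-drip IV-set value, and the function names are hypothetical.

```python
# Minimal sketch of infusion-rate monitoring, assuming an optical drop
# sensor that reports a timestamp (in seconds) for each detected drop.
# The drop factor of 20 drops/mL is a common macro-drip IV-set value.

DROP_FACTOR = 20  # drops per mL (assumed IV set)

def flow_rate_ml_per_hour(drop_times):
    """Estimate flow rate from a list of drop timestamps in seconds."""
    if len(drop_times) < 2:
        return 0.0
    span_s = drop_times[-1] - drop_times[0]
    intervals = len(drop_times) - 1
    drops_per_second = intervals / span_s
    return drops_per_second / DROP_FACTOR * 3600.0

def rate_ok(rate, target, tolerance=0.10):
    """Flag the rate if it drifts more than `tolerance` from the target."""
    return abs(rate - target) <= tolerance * target

# One drop every 1.8 s -> 2,000 drops/h -> 100 mL/h
times = [i * 1.8 for i in range(11)]
rate = flow_rate_ml_per_hour(times)
print(f"{rate:.0f} mL/h, within target: {rate_ok(rate, 100.0)}")
```

A real controller would close the loop by driving a pinch valve or pump from the measured rate and raising an alarm when `rate_ok` fails repeatedly.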
There are many kinds of potential brain-machine interface (sometimes called a brain-computer interface) that will serve many different functions. But everyone working on BMIs is grappling with either one or both of these two questions: 1) How do I get the right information out of the brain? 2) How do I send the right information into the brain? The first is about capturing the brain’s output—it’s about recording what neurons are saying. The second is about inputting information into the brain’s natural flow or altering that natural flow in some other way—it’s about stimulating neurons. These two things are happening naturally in your brain all the time. Right now, your eyes are making a specific set of horizontal movements that allow you to read this sentence. That’s the brain’s neurons outputting information to a machine (your eyes) and the machine receiving the command and responding. And as your eyes move in just the right way, the photons from the screen are entering your retinas and stimulating neurons in the occipital lobe of your cortex in a way that allows the image of the words to enter your mind’s eye. That image then stimulates neurons in another part of your brain that allows you to process the information embedded in the image and absorb the sentence’s meaning. Inputting and outputting information is what the brain’s neurons do. All the BMI industry wants to do is get in on the action. At first, this seems like maybe not that difficult a task? The brain is just a jello ball, right? And the cortex—the part of the brain in which we want to do most of our recording and stimulating—is just a napkin, located conveniently right on the outside of the brain where it can be easily accessed. Inside the cortex are around 20 billion firing neurons—20 billion oozy little transistors that if we can just learn to work with, will give us an entirely new level of control over our life, our health, and the world. Can’t we figure that out? 
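Question (1) above, "recording what neurons are saying," can be made concrete with a toy example. Real systems detect spikes in extracellular voltage traces; the simple amplitude-threshold-with-refractory-period method below is a standard textbook approach, sketched here under assumed threshold and refractory values, not any particular device's algorithm.

```python
# A toy version of "recording what neurons are saying": detect spikes in
# an extracellular voltage trace by amplitude thresholding, skipping a
# refractory window after each detection so one spike isn't counted twice.

def detect_spikes(trace_uv, threshold_uv=-50.0, refractory=30):
    """Return sample indices where voltage crosses below threshold_uv,
    ignoring `refractory` samples after each detection."""
    spikes = []
    i = 0
    while i < len(trace_uv):
        if trace_uv[i] <= threshold_uv:
            spikes.append(i)
            i += refractory   # skip the rest of this spike's waveform
        else:
            i += 1
    return spikes

# Flat baseline with two downward deflections ("spikes") at samples 40 and 120.
trace = [0.0] * 200
trace[40] = trace[120] = -80.0
print(detect_spikes(trace))  # [40, 120]
```

Question (2), stimulation, is the inverse problem: instead of thresholding a recorded trace, the device injects current pulses timed to produce the desired neural activity.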
Neurons are small, but we know how to split an atom. A neuron’s diameter is about 100,000 times as large as an atom’s—if an atom were a marble, a neuron would be a kilometer across—so we should probably be able to handle the smallness. Right? So what’s the issue here? Well on one hand, there’s something to that line of thinking, in that because of those facts, this is an industry where immense progress can happen. We can do this. But only when you understand what actually goes on in the brain do you realize why this is probably the hardest human endeavor in the world. So before we talk about BMIs themselves, we need to take a closer look at what the people trying to make BMIs are dealing with here. I find that the best way to illustrate things is to scale the brain up by exactly 1,000X and look at what’s going on.
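The marble-to-kilometer comparison above checks out numerically. The sketch below runs the arithmetic, taking a typical atomic diameter of about 0.1 nm and a 1 cm marble as the assumed reference sizes.

```python
# Sanity-checking the scale comparison: a neuron's diameter is roughly
# 100,000x an atom's, so scaling a ~1 cm marble by the same factor gives ~1 km.

ATOM_DIAMETER_M = 1e-10   # assumed: ~0.1 nm, a typical atomic diameter
MARBLE_M = 0.01           # assumed: ~1 cm marble
SCALE = 100_000

neuron_diameter_m = ATOM_DIAMETER_M * SCALE   # 1e-5 m = 10 micrometres
marble_scaled_m = MARBLE_M * SCALE            # 1,000 m = 1 km

print(f"neuron: {neuron_diameter_m * 1e6:.0f} um, marble: {marble_scaled_m / 1000:.0f} km")
```

A 10 μm diameter is indeed the right order of magnitude for a neuron's cell body, so the analogy, and the author's 1,000x scale-up in the next section, rests on sound numbers.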
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Wishing you all the success this year. May joy and success follow you in every sector of life, and may the new year bless you with health, wealth, and happiness. In our perfect ways. In the ways we are beautiful. In the ways we are human. We are here. Happy New Year's. Let's make it ours. The beginning is the most important part of the work. Learn from yesterday, live for today, hope for tomorrow. An optimist stays up until midnight to see the New Year in; a pessimist stays up to make sure the old year leaves. Happy New Year 2022!
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&