Review

(Cross-posted from my Substack, sorry if I'm overstepping. The original article has images; I'm not sure how to properly add them here.)

 

TLDR: RoboNet is a proposed new Internet protocol for AI content and AI systems. The approach aims to remove political and private-sector "ownership" of ad-hoc regulations and policies targeted at AI by moving it all into a bucket that lives in the technical domain. The protocol fixes old problems and brings new challenges to all parties, but it aims to vastly simplify the regulation and end-user consumption of today's and tomorrow's AI systems and media.

 

1. Introduction

RoboNet is a proposed new protocol (a peer to HTTP, much as the Signal protocol is to WhatsApp) for all AI-generated content and AI systems that impersonate human behavior, with a focus on transparency and algorithmic accountability. RoboNet's potential to enable a set of transformative benefits for the Internet and the people using it is a response to a challenge of our times, and it aims to update the Internet itself.

The year 2023 marks the last days of a World Wide Web (WWW) where the majority of content is still mostly human-made, and its synthetic counterparts arrive with many challenges regarding cultural and intellectual integrity, authenticity and operational accountability. RoboNet explores a novel approach to address the pervasive issues of misinformation, disinformation, automated manipulation and content-provenance conflicts that arrive with recent advances in AI-generated media and systems, before the WWW becomes a place where the distinction between human and synthetically made content is blurred forever.

Figure 1: Internet content and AI alignment tendencies

The RoboNet Artificial Media Protocol (RAMP) integrates with existing digital technologies, ensuring compatibility and ease of adoption across platforms, with minimal adaptation costs for infrastructure providers such as Amazon and operating system (OS) companies such as Microsoft and Apple. The protocol enables a valuable set of solutions for synthetic-content regulation, standards development, and friendlier stakeholder collaboration, as RoboNet fosters a common collaborative ecosystem that goes beyond artificial-content classification, targeted interventions, and reactive responses to AI-generated content manipulation at all scales. RoboNet aims to technically address problems that are at the core of the international misalignment on AI regulation and the lack of common ground for addressing the misuse of AI automation tools and other AI technologies of today and tomorrow.

As an Internet protocol, it is important to notice, RoboNet is exposed in the same way at both the enterprise level and on everyday Internet edge devices. While no less relevant, the enterprise perspective demands a more robust, industry-oriented introduction, so this article limits itself to exploring the core RoboNet RAMP concepts and their advantages over HTTP alone, with a particular focus on end-user dynamics and the protocol's regulatory instrumental features.

For the everyday Internet user, RoboNet brings a single FILTER button to all devices, illustrated as a toggle with 3 states in Figure 2:

Figure 2 - RoboNet Filter example as an iOS like toggle button.

  • HTTP/ON [only human content Internet]
  • RAMP1/Mixed [mix /composite human /AI content] [IPTC 1]
  • RAMP2/OFF [human, mixed and 100% digital content] [IPTC 1 - 2]

The middle (default) option is equivalent to today's 2023 Internet, with no retroactive changes; mixed AI-generated content coming from RAMP1 services is served by default under this choice. The new features (ON and OFF states) are set to be exposed at the Operating System (OS) level (Android, iOS, Windows, Mac and Linux for most cases). The 3 states are technically referenced as HTTP, RAMP1 and RAMP2, displayed here in that order.
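A minimal sketch of how an OS might model the three filter states described above. All names here are illustrative assumptions on my part, not part of any specification:

```python
from enum import Enum

class RoboNetFilter(Enum):
    """Hypothetical OS-level RoboNet filter states."""
    HTTP_ON = "http"       # only human-made content
    RAMP1_MIXED = "ramp1"  # human plus mixed/composite content (default)
    RAMP2_OFF = "ramp2"    # human, mixed, and 100% synthetic content

# Content classes each state admits, per the toggle description above.
ALLOWED = {
    RoboNetFilter.HTTP_ON: {"human"},
    RoboNetFilter.RAMP1_MIXED: {"human", "mixed"},
    RoboNetFilter.RAMP2_OFF: {"human", "mixed", "synthetic"},
}

def is_served(state: RoboNetFilter, content_class: str) -> bool:
    """Return True if content of the given class is served under this state."""
    return content_class in ALLOWED[state]
```

The point is that the decision is a single set-membership check the OS can apply uniformly, rather than per-app moderation logic.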

Given the OS-level toggle, the RoboNet protocol enables a gateway-like response for how the OS handles RAMP traffic. Much like a phone's Airplane Mode toggle, RoboNet exposes an Internet where people can make informed decisions about the type of content they want to consume online. RoboNet creates a clear, holistic distinction for how artificially generated content is served and consumed on the Internet.

By enabling the RoboNet protocol, countries, regulators, policymakers and other Standards Developing Organisations (SDOs) can share a common technical language that addresses several of the known and yet-unknown capabilities of AI technologies. And as RoboNet insulates the HTTP protocol from a never-ending series of highly diffuse ad-hoc interventions targeted at the misuse of AI technologies, it acts as a proper catalyst for advances in AI service regulation and standards development, in an intelligent and compartmentalized fashion.

2. AI-generated content: A perspective

Given the recent development of GenAI systems such as Stable Diffusion and ChatGPT, a remarkable share of our online systems is now exposed to the misuse of similar and newer smart technologies, which empower those with malicious intent with capabilities yet to be properly understood. And while RoboNet does not aim to provide an ultimate solution to all the old and new AI problems, its sharp focus on artificial media and service delivery at the OS level lets RoboNet interface with the issues this type of content and behavior raises in digital services before they impact people, institutions, organizations and ecosystem dynamics.

The following list samples two key dimensions of AI content and systems that arrive at distinct, non-chronological moments of the AI interaction lifecycle: the type of issues and how these are delivered.

The type of issues enabled by AI-generated media:

  • content that lacks watermarking or basic authorship consensus;
  • training data IP, copyright, fair use and licensing matters;
  • collection methods, source biases and information trustworthiness;
  • alignment, control and misuse;

The how issues, as AI-generated content:

  • can be used to attack, corrupt and misuse legacy Internet services;
  • can impersonate biometrics such as voice or face and other credentials;
  • can automate misuse of data, techniques and credentials at scale;
  • can influence elections, media and social dynamics;

Figure 3: Left - the Internet as it is today, with no protocol distinction for AI services. Right - RAMP claims AI services, leaving minimal AI overlap with HTTP services.

RoboNet RAMP aims to be inserted, both technologically and semantically, into the digital services stack as a proper delivery mechanism that mediates how AI services and content are exposed alongside the "legacy" HTTP, providing a clear legal instrument that allows targeted mitigation responses to the type and how issues of AI systems and media listed above.

3. The non-RoboNet approach

Recent American, Canadian and European developments in AI policymaking tackle major society-wide AI harms by using risk-analysis principles as a holistic instrument, in a very valid fashion. But treating AI traffic, apps, services and content the same as any other HTTP traffic comes with heavy societal-scale risks. Even with our ever-improving regulations, it means governments and citizens must continue to rely on private-sector companies and third parties to classify and moderate AI and automated content, demanding new sub-systems, procedures and institutional policies to tackle misuse at subsequent key moments of the digital services lifecycle. This is a particularly diffuse approach, in which effective common governance mechanisms at global scale are virtually non-existent, and which is even living a moment of infighting for international influence in a standard-setting warzone, where private-sector and state-led AI governance initiatives aiming for "AI leadership" compete with non-state-led initiatives (most notably the OECD) and even SDOs such as ISO and the IEEE.

Laws, regulations and SMEs of all fields ask content providers and platforms on the Internet to label synthetic media, identify chatbots, address disinformation posted or promoted by automated systems, create content filters for influence-as-a-service entities, address manipulative algorithmic systems, and curb the use of deepfakes and AI-enabled disinformation operations. The ever-growing list imposes costs at all operational levels for service providers and their partners, with no industry-wide established QoS standards, largely because the way societies react to this problem list is as new as the list itself. In a domain where even actual experts are rare animals, and where lawmakers and politicians who can speak the field's lingo with basic technical proficiency are rarer still, it is always a painful watch.

But RoboNet does not come without an ask for policymakers either: the use of the RAMP protocol should be a standard enforced by law or regulation, which by consequence penalizes other protocols hosting or serving RAMP content and services. That's it.

It's an update to the Internet that, without binding compliance, is basically toothless. So what exactly is RoboNet/RAMP then?

4. RoboNet Artificial Media Protocol

While the RoboNet Artificial Media Protocol (RAMP) final specification may define its interactions with other protocols more strictly, in essence RAMP should deliver artificial content with few departures from how the well-known HTTP/S semantics and definitions already work today. In its most simplistic fashion, accessing RAMP content is as simple as using its specific URI scheme:

Figure 4 - RAMP URI examples.

I like to believe that birthing a new Internet protocol standard isn't exactly anyone's forte, so while I can't pretend to know what I am talking about here, apparently we need the standards bodies to have a few meetings and some working groups, and somehow a protocol like RAMP is born. For simplicity's sake, this article assumes RoboNet's design can be bootstrapped by simply cloning the HTTP protocol specifications (suppose we can just clone and rename them to ramp), in order to accelerate RoboNet's time to market while still giving regulators and SDOs a reasonable time to discuss the initial RoboNet AMP specifications over its templates. The long version is that there are many functional, engineering-level discussions to take place here, where a custom and more intelligently designed protocol may be the most robust choice in the real world. But in essence RoboNet does not aim to offer the Internet or apps any new functionality or service: RAMP just claims all artificially generated media, and some of the AI systems that generate it, under its ramp:// umbrella, as frictionlessly as possible, with no new problems or surprises. The idea is a smooth deployment roadmap, with no end-user action required other than perhaps common OS and app updates.
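One small consequence of cloning HTTP semantics is that existing URI tooling handles a ramp:// address unchanged. A quick sketch with Python's standard library, using a hypothetical address of my own invention:

```python
from urllib.parse import urlparse

# A hypothetical ramp:// URI. urllib.parse handles unregistered schemes,
# so it decomposes exactly like an http:// URL would.
uri = urlparse("ramp://media.example.com/gallery/ai-art.png")

print(uri.scheme)  # "ramp"
print(uri.netloc)  # "media.example.com"
print(uri.path)    # "/gallery/ai-art.png"
```

Nothing about parsing, linking or routing would need to change for apps; only which scheme they emit and accept.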

The RoboNet Protocol consists of 5 core principles:

  • RAMP hosting services host Mixed and 100% AI generated content only;
  • RAMP serves all AI systems /services that provide human like behavior;
  • RAMP service modeling should promote academic data cooperatives /trusts;
  • RAMP content should not be hosted by providers serving it via other protocols;
  • RAMP compliance should be a legal /regulatory binding requirement;

5. Content Provenance and Verification

Technical communications between web services, APIs and even other protocols take place in the background of millions of apps every day, and users hardly notice differences in how these perform their network requests. But since HTTP providers and services should no longer host or serve RAMP content, hard links to it are exposed on the regular WWW as hotlinks to the RAMP content. In its simplest form, the difference between HTTP and RAMP is as illustrated in Figure 5:

Figure 5 - RAMP hotlink example: ramp:// is used instead of http:// - apps and websites should offer RAMP with behaviors similar to those already known from HTTP.

Guided by RoboNet's core principles, RAMP updates the Internet so that the Internet itself keeps up with the times, as emerging technologies don't ask for permission to explore the pitfalls of legacy governance models in all places digital. By enforcing RAMP, over time HTTP ends up serving and hosting only "human made" content again, given that RoboNet draws a clear distinction in how today's AI systems offer society their services, a distinction that is likely to eventually become an unavoidable necessity.

The benefits of bootstrapping this architecture are many. For example, artists then have two proprietary domains of art in which to explore their craft, HTTP and RAMP1 (for mixed authorship), while 100% AI art is safely isolated in the RAMP2 access tier. So at the same time that AI-generated content provenance and authorship get isolated for both end users and regulators, all online art and cultural content can be identified in a clear way without relying on vendor-locked watermarking, content scanning, and other privacy-threatening techniques.

With any content generated by AI systems having to "live" exclusively inside RAMP services, artificial media metadata of all types may not only provide ideal provenance and authorship credentials; it can now also promote new business models where users safely subscribe to 100% AI services, a brand-new business opportunity that, thanks to RoboNet, can rely on resilient AI lifecycle integrations that actually deliver this proposition. This means that countries and regulators would still be free to decide how to address domestic privacy laws, electronic identities and other digital services regulations in a way that doesn't have to demand AI content classification because of the threat of new AI systems, thanks to RAMP's consistent transparency. This same transparency allows HTTP services to detect the upload of files with RAMP headers or provenance metadata and refuse to host them under the wrong host type or protocol. In other words, end users might not notice any impact in most of their daily online lives, as RAMP compliance can be automated at the technical level, for example at load balancers.
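The load-balancer check just described could be as simple as the sketch below. The header name and decision logic are my own illustrative assumptions; RAMP defines no such header today:

```python
# Hypothetical load-balancer check: reject uploads that carry RAMP
# provenance metadata when the receiving service speaks plain HTTP.
RAMP_PROVENANCE_HEADER = "x-ramp-provenance"

def accept_upload(service_protocol: str, headers: dict) -> bool:
    """Return True if the upload may be hosted by this service."""
    has_ramp_provenance = RAMP_PROVENANCE_HEADER in {
        k.lower() for k in headers
    }
    if service_protocol == "http" and has_ramp_provenance:
        # Principle 4: RAMP content must not be hosted over other protocols.
        return False
    return True
```

Because the rule is a pure function of protocol and metadata, it can run in any existing middleware with no user-visible change.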

New standards for generative AI media metadata are still largely unknown to the public, but Google, Midjourney and Microsoft have already adopted IPTC and C2PA specifications for media generated by AI, and the RoboNet protocol encapsulates these within its standardized provenance with zero friction, offering even more compliance mechanisms for digital asset consumption and training by AI systems, such as those specified by C2PA metadata, sampled here in CBOR diagnostic format:

// Assertion for specifying whether the associated asset and its data
// may be used for training an AI/ML model or mined for its data (or both).

{
  "entries": {
    "c2pa.ai_training": {
      "use": "allowed"
    },
    "c2pa.ai_generative_training": {
      "use": "notAllowed"
    },
    "c2pa.data_mining": {
      "use": "constrained",
      "constraint_info": "may only be mined on days whose names end in 'y'"
    }
  }
}
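To show how little machinery a consumer needs, here is a sketch that reads the assertion above, represented as a plain Python dict (real C2PA assertions are CBOR inside a signed manifest; the helper function and its default are my assumptions):

```python
# The C2PA-style training/mining assertion from the sample above.
assertion = {
    "entries": {
        "c2pa.ai_training": {"use": "allowed"},
        "c2pa.ai_generative_training": {"use": "notAllowed"},
        "c2pa.data_mining": {
            "use": "constrained",
            "constraint_info": "may only be mined on days whose names end in 'y'",
        },
    }
}

def declared_use(assertion: dict, entry: str) -> str:
    """Return the declared 'use' for an entry, defaulting to notAllowed
    when the asset declares nothing (a conservative assumption of mine)."""
    return assertion["entries"].get(entry, {}).get("use", "notAllowed")
```

A crawler could call `declared_use(assertion, "c2pa.ai_generative_training")` before ingesting an asset into a training set.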

With clear provenance distinction, trust can be broadcast at the protocol level, so Internet users would no longer need to rely on local laws or corporate policies to trust whether content is authentically human-made, nor be exposed to unfair non-human competition and practices. RAMP provides a unique opportunity for AI service providers to benchmark and build trust in AI systems, as its robust framework calls for responsible and ethical practices, where Internet users then have a real chance of no longer being exposed to artificial content or artificial systems by mistake or malicious intent.

From Day 0, RoboNet enables social media platforms (like Facebook, Instagram, TikTok, Twitter) and similar firms to instantly stop having issues related to automated account creation and automated content dissemination. As a side effect, all current methodologies for influence operations, disinformation campaigns, fake news and other viral automation software would need to be reinvented, as RAMP exposes their RAMP2-tier traffic nature to both RAMP and HTTP handshake filters.

Figure 6 - RAMP exposes two new content layers where AI traffic is exposed for what it is, not for what it isn’t.

As specified in Chapter 4, RoboNet protocol requirement 2 assumes all automated systems that provide "human like" behavior then need to be hosted and served using the RAMP protocol. This doesn't mean today's automation services cease to exist or operate, but it means their traffic is identified as such at the protocol level, so that both service providers and end customers interact with such services as: a) defined by RoboNet specifications; b) defined by sovereign or legal entities; c) defined by corporate or parent policy; d) defined as an end-user setting, in this order. This architecture no longer asks social media companies to "detect" artificial content by themselves; in fact, they don't even have to deal with these automated requests arriving via ramp:// in the first place, if they don't want to. RoboNet claims the concentration of decision-making power from this entire set of companies and tells them to develop strategies that accommodate artificial content and systems instead, treating the use and distribution of mixed and 100% artificial content and systems on their services as strategy, granted adequate compliance with the highly enforceable provenance mechanisms introduced by the RoboNet Protocol.
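The four-layer precedence order (a through d) can be sketched as a simple first-match walk. The layer names and the fallback are illustrative assumptions of mine, not protocol definitions:

```python
# Precedence of the policy layers named above:
# protocol spec > sovereign/legal > corporate/parent > end-user setting.
PRECEDENCE = ["protocol", "legal", "corporate", "user"]

def resolve_policy(layers: dict) -> str:
    """Return the first decision found walking the precedence order."""
    for layer in PRECEDENCE:
        decision = layers.get(layer)
        if decision is not None:
            return decision
    return "allow"  # illustrative default when no layer sets a policy
```

So a user's "allow" setting can never override a legal "deny": `resolve_policy({"user": "allow", "legal": "deny"})` yields "deny".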

Over time, Internet traffic systems and other digital bottlenecks should also be able to identify and map RAMP traffic, and to properly separate all traffic belonging to the RoboNet protocol out from the WWW, so that traffic equilibrium, and by consequence optimal anomaly detection, should naturally emerge after an adaptation period. The outcome is an Internet whose structural protocols allow for AI traffic classification, a feature that should over time enable mitigation of malicious attacks and digital crimes such as new types of computer viruses and even other smart, AI-driven threats yet to be created. Provenance trust over automated content and systems becomes a protocol feature, not a corporate, geographical or political consequence, and it works out of the box with zero friction against how existing regulations already classify, assess and certify AI tools, services and providers.

By limiting artificial media at the protocol level, parents and corporate customers not only have a safe choice for denying some or all artificial media content; the approach also enables a common standard for apps and service providers to safely offer microfilters, a subset of content-type moderation offering the same capabilities today's providers already have in the form of proprietary APIs or in-house solutions, but which share no common procedures or standards with other providers, something RoboNet can guarantee as a design principle. Examples include device-level PG-13 compliance for AI content, or federated compliance for access to AI features in corporate environments. The protocol enforces a friendlier environment for good practices between platforms and Internet companies.

But where RoboNet unexpectedly excels is that, by being a distinct protocol for AI systems with OS-level compliance requirements, even offline AI systems would have to abide by the protocol settings, which safeguards that offline devices still follow corporate, parental and user-chosen policies regarding AI content, a paramount feature that today's HTTP protocol was never designed to offer.

Finally, there are likely even more complex and possibly unforeseen scenarios involving the misuse of AI in offline modes, particularly given that today's AI systems' and apps' online traffic has no distinction from any other type of app traffic. By offering this enforceable* and distinct protocol, phone and computer OS settings can expose RoboNet's extra security layer for any app. Without exposing user privacy or relying on online services, this may in the future even enable authorities to use RAMP to define geofencing policies that completely disable the use of these AI systems in specific geographic areas, such as schools and prisons, even for offline sideloaded apps, a capability no technology or regulation currently available can offer, much less enforce. RoboNet introduces the technical conditions that may accomplish all of this, with virtually zero technological complexity or added cost, streamlining a novel human-centered security feature that, while still imperfect, defines a method and common language for discussing these scenarios without its interventions affecting Internet users on the WWW, a remarkably clean instrument for AI regulators.

*Enforcing compliance won't stop malicious users from compiling AI systems that bypass or disguise RAMP traffic as HTTP or similar traffic, or even more advanced threats specifically engineered to bypass compliance restrictions. But having an entire protocol for automated content and AI data traffic such as RAMP in place may offer a unique insight instrument for behavioral fingerprinting of that traffic, for training algorithms, firewalls and similar tools that can be exposed at the OS level to enforce stronger assurances against the misuse of AI technologies even under the most challenging scenarios, in an automated fashion.

6. Regulation and Standards

RAMP understands any automated system that performs a human job or impersonates online human behavior to be its responsibility, leaving the HTTP Internet for human-taken pictures, human-written books and songs, human-created accounts, and human-written posts on social media. RoboNet claims all artificial media generators and performers to be served as:

Figure 7 - RAMP service URI address example.

This does not change today's Internet nor how Google and links work: people would still go to https://chat.openai.com/, but technically all ChatGPT responses would arrive via the ramps://chat.openai.com/ protocol in the background, and its users would be none the wiser. That's how transparent the RoboNet Protocol is in the real world for apps and service providers.

RoboNet aims to create a technical and operational instrument whereby online automated services can no longer hide their tracks by blending into common HTTP traffic noise when navigating the WWW. When a RAMP2 bot scans a blog hosted on the WWW, its protocol provenance promptly identifies the bot as an automated agent, with no need to rely on proprietary validation processes, as the protocol itself handshakes with other protocols on the Internet as what it is, not as anything else.
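From the server's side, this reduces agent classification to a check on the scheme the traffic arrived on, rather than user-agent heuristics or CAPTCHA-style detection. A minimal sketch, with the scheme names assumed from this article:

```python
# Illustrative server-side view: with RAMP in place, an agent's nature
# is carried by the protocol itself, not inferred after the fact.
def classify_agent(request_scheme: str) -> str:
    """Classify an inbound agent by the scheme its traffic arrived on."""
    if request_scheme in ("ramp", "ramps"):
        return "automated"   # RAMP-tier agents announce themselves
    return "unclassified"    # legacy HTTP traffic: human or unknown
```

An LLM training crawler fetching a blog post over ramps:// would classify as "automated" before any content is exchanged.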

Businesses and regulators can find in RAMP a common technical ground to discuss how large language model training crawlers scan the HTTP Internet for data, or how companies that operate using artificially generated social media accounts interface with the HTTP Internet. RoboNet removes from everyone's online lives the burden of the mass automated methods companies employ to bypass, abuse and misuse HTTP "public" data, while at the same time enabling a common technical framework for companies to expand on more adequate and standardized B2B practices.

RAMP can address common risks arriving with the misuse of new AI systems built with no industry-wide standards: with no common ground for targeting AI technologies, regulators impose content classification and detection on service providers' technological choices, budgets and compliance incentives. RoboNet eliminates this step and the usual implications of its misuse, such as biases and behavioral doctoring; provenance is no longer a matter of detection, but a mechanism that can be enforced, audited and classified by domestic and international regulations and policies.

By giving end users and enterprise-level operators a single toggle choice for both AI services and AI-generated content, RoboNet empowers the users, not corporations and regulators, over what the Internet is and how it is consumed. And its features don't end at social and commercial endeavors.

Empowering users to see what elections look like online without the noise of automated and generated content and accounts is a remarkable feature of the RoboNet Protocol, and to the best of my knowledge no other solution offers this experience. And while this doesn't mean users will not want to see mixed or artificial content during an election, RoboNet raises the bar for the practice, as companies will have to offer better incentives for most of their users to opt in to automated accounts and artificial media on their apps at these times. Some would say elections without automated accounts promoting disinformation or misinformation is a remarkable RoboNet feature in itself, but I personally like to think that having three separate content streams to offer users opens new business venues for candidates to explore the two novel streams with their electorate. The opportunities are endless, and users retain the control to engage with a more streamlined and human online election experience.

So a key impact for service providers such as social media platforms is in fact an opportunity: to develop and deploy three new strategies for the distinct types of content-delivery algorithms they should then have available for their users, based on user preferences. I strongly believe that in order to bring customers to the "ON" and "OFF" options of the RoboNet filter, companies will have to innovate on their business models and practices, while Internet users may then experience this refreshed Internet, less toxic and less geared by poor incentives.

Figure 8 - RAMP protocol offers clear traffic distinction between HTTP and RAMP services/data while still providing the foundation for future use cases.

As an Internet protocol, RoboNet is also less exposed to domestic political frictions. As it is not a law, its definitions can remain technical in nature, removing the burden of asking social media platforms and Internet service providers to take sides on political stands regarding the computational and artificial advantages of their home jurisdictions' political mood. Whether law enforces RAMP use or not is a desirable political response from regulators and SDOs, not a technical requirement for RAMP to be deployed or adopted.

Also, to the best of my knowledge, AI regulations, even the most recent ones such as the EU AI Act, are protocol-agnostic, so RoboNet inherits legal provisions as fluidly as possible. But unlike a law, RoboNet doesn't need to be sanctioned by Russia to work in Russian browsers: it is a software update, and if countries decide to keep regulating AI the same way they already are, nothing needs to change. RoboNet doesn't change the rules of the game; it only aims to promote frictionless technical and legal characterization of AI systems and AI-generated media, but by doing so it comes with valuable politico-regulatory benefits.

7. Functional Assumptions

While brainstorming more assumptions may take time and resources, RoboNet's conceptual aspirations for its architecture imply a number of behaviors that can be dictated either by design at the protocol specification level, or at the application level through how other protocols interact with the RoboNet Protocol (i.e. HTTP and RAMP error messages). What follows is a list of possible side effects of the RAMP Protocol, in no particular order, that are interesting to list and explore:

  • Enables streamlined AI training environments;
  • Optimal control mechanisms;
  • Constitution for the RAMP protocol that rules its behavior;
  • Custom rules of engagement with HTTP WWW /other protocols;
  • Bottleneck for automated attacks on both RAMP and HTTP interaction ends;
  • Better Data Poisoning /Adversarial attacks protections;

Topics vary from interesting advantages of not having humans doing things on the RoboNet itself, to novel methods for detecting automated attacks on both the WWW Internet and the RoboNet itself, where the protocol can act as an early attack-detection system or layer. RoboNet can also host distinct control mechanisms for the content it serves, enabling a proper instrument, focused on reducing the risks associated with uncontrolled automation and malicious use of AI content and systems, that is simple for regulators and companies to target.

8. Challenges and Limitations

The RoboNet Protocol can enable a more balanced and human digital life for both our own and future generations of Internet users, but while it offers a common technical environment for how AI data can and should be shared with and between AI and legacy systems, it does not magically replace intellectual property rights discussions. RAMP is merely an encompassing asset that simplifies AI regulatory discussions by providing a common denominator for matters of attribution, privacy and legal dispositions, such as ownership or licensing related to AI systems and their byproducts, at the application layer of the many digital service lifecycles.

By segmenting AI-generated content and automated systems out of the HTTP protocol, the RAMP protocol can single-handedly address the ever-growing misuse of digital services coming from automated accounts posting and reposting content via mass algorithmic manipulation and the ever-improving impersonation capabilities of AI on the Internet as we know it today. But it does so by providing a solid provenance trail, not by addressing these issues directly. It would still be the providers' job to address them; but just as programming websites in a standard language such as HTML promotes universal compatibility, RAMP does the same for application-layer traffic, which allows ALL apps, websites and even APIs to share common security and compliance principles regarding AI.

Consider the malpractices of election manipulation with fake social media accounts, fake stories posted by automated systems, and Internet "public" data getting scraped by firms whose services lie at the limits of what constitutes legal practice: RoboNet AMP becomes an identity for these systems, so that users can experience what their social media apps feel like with and without automated accounts, while platforms still possess full autonomy to decide how to interact with customers who opt in to their niche audience model. RoboNet enables a more human approach to how artificial content systems interact with real people. It raises the bar for algorithmic accountability and supplies the technical constraints that enable verification, quantification and qualification of these services without negatively impacting HTTP "legacy" operations, as the burden of luring customers to services with fully or partially AI-generated content activated becomes an incentive for a new market to be explored, not an imposition on today's Internet users.

As RoboNet asks countries and regulators to combine efforts for its apolitical compliance, in return it offers an exciting venue for intelligent solutions regarding AI technologies. RoboNet asks very big stakeholders to sign not for a pause on AI development, but for an update to the Internet itself, as this route enables a more consistent and encompassing approach, human-centered by design and crafted for cooperation and even future AI challenges, to promote society-wide benefits in the new era of AI content and systems. RoboNet acts as a building block for initiatives looking toward the fair and safe use of AI; it is a foundation on which the Internet can grow with a plan.

Sure, a distinct protocol also comes with more mundane technical features, such as specific publication dates, a valuable asset for distribution, attribution and licensing operations covering artificially made content, where RAMP enables end-user systems to clearly display this information. But this is not a job for RAMP itself: RAMP enables OS-level events, and it is up to apps, websites and services to decide how to properly engage with the new features RAMP introduces. RoboNet updates the Internet's own structure, as shown in Figures 9 and 10, so that users, companies and governments can understand (and isolate) AI traffic on the WWW.

 

Figure 9 - Without RoboNet, the Internet will end up with AI services/clones in every aspect of our digital lives.

Figure 10 - RoboNet RAMP allows AI content and systems to have their own expression in their own particular space, in a way that allow Internet users to interact with RAMP content as they chose to, no matter the service, while preserving today's HTTP www as is.

RAMP enables a valuable set of foundational technical assets aimed at today's issues with how AI systems interact with people on the Internet, in a prosocial approach enabling a safer and more coherent online experience. It protects Internet users from the geographically disconnected pace of regulation and from the lack of incentives Internet companies have to address the misuse and abuse of their services. But that is its limit: countries and their regulatory bodies would surely still face challenges, albeit new ones, as RoboNet merely changes the dynamics for bad actors; it doesn't eliminate them.

But most of all, RAMP enables policymakers, regulators, academics and institutions to observe how RAMP behaves and evolves, and over time to analyze the ramp:// protocol's characteristics in order to gain this unique view of how we collectively use AI, leaving these systems to operate and evolve at their own digital pace.

9. Conclusion

The RoboNet Protocol introduces a common technical environment where artificial content origins can be reliably traced, verified and authenticated, empowering users to discern between AI-generated and human-created information and systems. By shifting the burden of content classification away from Internet service providers, RoboNet, as a protocol, offers a novel approach that promotes fairness, transparency and accountability while granting semantic mechanisms that can protect human rights and freedom of expression online. Provisioning the RAMP protocol asks for only modest political synergy and regulatory effort toward its specification and deployment; even so, it remains a political and technical effort orders of magnitude smaller than orchestrating common and meaningful international regulations for today's and tomorrow's AI challenges. RoboNet/RAMP offers an unprecedented set of benefits for a solution that can be crafted in computers, not in courts, at relatively small political and temporal cost.

1 comment:

To explain my downvote:  This isn't a protocol, it's just a URL scheme name.  There's no indication of why anyone would use it, or how it does anything differently from existing mechanisms.