Proof of Individual Work (PoIW) (uPoW)

Proof of Individual Work (PoIW) or Proof of Small Work (PoSW) or Proof of Tiny Work (PoTW) or U (micro) Proof of Work (uPoW) or You Proof of Work (YouPoW)

The future is proof of work.

I want to make a search engine where you have to prove that you (or someone else) did work to boost your URL in the search engine's ranking algorithm.  Here are some ways I have described to do this.

1.  Proof of Burn - Proving that you sent an amount (specified or not) of cryptocurrency such as Monero or Bitcoin to an unusable address.

2.  Proof of Captcha - Proving that you are "human" - in reality a proof of "human" work (PoHW).

3.  Proof of Hits - Proving that your post is popular.

The great thing about all of these methods of boosting your (or someone else's) URL in search rankings is that they are anonymous: unlike mobile, email, or IP verification, they don't require storing things that law enforcement would want to scrape out of our open source database.

Another new proof of work is described here.  I will call it Proof of Individual Work.

4.  Proof of Individual Work - Proving that "you" did an intensive "cpu" operation.

"CPU" can be any type of machine work, but preferably CPU work, since graphics cards and ASICs are in limited supply and have limited uses compared to CPUs.  Almost everyone on the planet has access to a CPU.

"You" can be anyone who believes in the thing (in this example, a URL) enough to expend work promoting it.

What is the difference between Proof of Individual Work (PoIW) and standard proof of work (PoW)?  PoIW can (but doesn't have to) be done by your very own computer.  PoW is basically socialized work: your computer will almost assuredly never win a proof of work challenge on its own.  This isn't so bad, since you can trade with someone who was lucky enough to win a PoW challenge, get a little bit of cryptocurrency from them, and subsequently burn it and prove you did so (proof of burn) to raise your URL's ranking in the search algorithm (or for any other use).  So proof of burn becomes a sort of individualized proof of work that mimics PoIW.

So then what is the difference between Proof of Burn (PoB) and Proof of Individual Work (PoIW)?

Firstly, PoIW doesn't require you to acquire cryptocurrency.  Why does this matter?  It is very hard to trade something you have for cryptocurrency with an anonymous person: since they are anonymous, you will probably have a hard time figuring out what they want in exchange for a bit of their currency.  This creates a really big problem.  Bitcoin, or any cryptocurrency, wants to be traded anonymously, which means it will almost always trade for government-backed currency or another global reserve currency almost exclusively.  So the whole point of Bitcoin becomes moot when it can only be traded for Central Banker Digits (US dollars or equivalent).  Another way to get a little piece of cryptocurrency is to join a socialized mining pool.  Again, this puts your machine in a position where it is dependent on a socialized system which you may or may not want to be a part of.  Pools run counter to the "one CPU, one vote" vision of cryptocurrency and can be used to centralize control of the cryptocurrency.

So how does PoIW break us out of the cryptocurrency paradigm for proving work?  Because we can give an individual computer a task that it can complete in a reasonable time frame to prove it did work.  How do we do that?  One method is to generate a random number of sufficient length (starting with perhaps 116 digits) and have the computer in question return to us the prime factors of that number.  Prime factorization of numbers over 100 digits is believed to gain no significant speedup from GPUs or ASICs because it requires a General Number Field Sieve (GNFS).  When the factors are found and returned to us, we can verify that each factor is prime with a primality test (which should only take fractions of a second for factors of this size) and validate that all the factors multiplied together give the random number we asked them to factor.  In under a second we can verify that they completed the work we gave them, all without requiring cryptocurrency.
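As a sketch, the server-side verification could look like this in Python (stdlib only; the Miller-Rabin test is probabilistic, but at 40 rounds a false positive is astronomically unlikely):

```python
import math
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def verify_poiw(challenge, factors):
    """Accept a claimed factorization only if the factors multiply back
    to the challenge and every factor passes the primality test."""
    return math.prod(factors) == challenge and all(
        is_probable_prime(f) for f in factors
    )
```

Verification stays cheap even for 100+ digit inputs because `pow(a, d, n)` is fast modular exponentiation; all of the expensive work sits on the prover's side.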

Now, the biggest vulnerability in this PoIW process is the generation of the random number.  Firstly, the random number may not be truly random, and vulnerabilities can be found in that.  Secondly, the generation of the random number is centralized on the server, so it could be open to abuse by the server owner.  How can we generate a true random number?  For this, I believe we have to go back to cryptocurrency to give us a random number that everyone can agree is random.  To do this, we can simply look at the merkle root of the latest block of a cryptocurrency and hash it with a hashing algorithm into the length we want (say 116 digits), or hash it into a larger number than we want and then truncate down to the digit count we want.

So, taking the Monero blockchain as an example: the central authority (server) could look at the merkle root of the latest Monero block, hash it with SHA-384, and convert it to base 10 (decimal), which gives up to 116 digits, then present this number to the person who wants to boost their URL so they can factor it for us.
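A minimal sketch of that derivation, using a made-up merkle root purely for illustration:

```python
import hashlib

def challenge_from_merkle_root(merkle_root_hex):
    """Hash a block's merkle root with SHA-384 and read the digest as a
    base-10 integer. A 384-bit digest is at most 116 decimal digits,
    since 384 * log10(2) is about 115.6."""
    digest = hashlib.sha384(bytes.fromhex(merkle_root_hex)).digest()
    return int.from_bytes(digest, "big")

# hypothetical merkle root, for illustration only
n = challenge_from_merkle_root("ab" * 32)
print(len(str(n)))  # typically 115-116 digits
```

Because the hash is deterministic, anyone can recompute the same challenge from the public block data and confirm the server didn't pick a rigged number.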

Now we have a problem.  Say two people are trying to boost their URLs at the same time.  Hashes are deterministic, so both of these people will have the same number to factor.  We are back to the cryptocurrency problem, since this is no longer an individualized proof of work and only one person can prove their work.  What can we do?

Well, we can allow people to select their own unique, one-time-use nonce.  They will, for example, take their nonce, multiply it by the merkle root of the latest Monero block, hash that with SHA-384, then present to the server the prime factors of the resulting number.  This works because the server can check that the arbitrary nonce the person selected, multiplied by the merkle root and hashed, equals the number the person factored, and the factors themselves can be checked for primality and for multiplying back to give the number.
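One possible encoding of that scheme (the multiply-then-hash step is my reading of the description above; the exact byte encoding would need to be pinned down in a real spec):

```python
import hashlib

def challenge_with_nonce(merkle_root_hex, nonce):
    """Individualized challenge: multiply the merkle root (read as an
    integer) by the user's one-time nonce, hash the product with SHA-384,
    and read the digest as the number to factor."""
    product = int(merkle_root_hex, 16) * nonce
    digest = hashlib.sha384(str(product).encode()).digest()
    return int.from_bytes(digest, "big")
```

The server simply re-runs the same derivation from the submitted nonce and the known merkle root, so no trust in the prover is needed.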

Great.  Now we have this proof of individual work.  As long as our database doesn't say that someone else has already factored that same number and claimed the URL boost, this person will get it, no matter how long they take to find the prime factorization.  The only downside is that the person can continually change their nonce to try to get an "easy" number to factor - say a number whose only factors are 2's - nullifying the whole point of prime factorization favoring CPUs over ASICs in the first place.  One way around this problem (which I will term "nonce flipping" or "nonce forcing" or "nonce grinding") is to require that the number of prime factors be within a set range.  "Easy" to factor numbers will tend to have either a very high or a very low number of factors.  And/or we can require that the prime factorization gives factors of certain lengths: say, for a 116-digit number, we can require that at least one of the factors is at least 50 digits long.  And/or we can say there must be between 7 and 15 prime factors, and/or that a certain number of the prime factors must be distinct.  The downside of this is that even if the person factors the number the "right" way, the number may just not meet the requirements of the PoIW, and they would have to try again.  Hopefully we can tune this so that the average time to find a solution on a modern computer is within a given time frame, say 5-10 minutes or whatever range we want.

Example parameters (what I will likely start with and tweak if needed):

1. 116-digit number from SHA-384, truncated to 101 digits (~335 bits)

2. No prime factor can be longer than 51 digits (so an "RSA number" could still work)

3. All prime factors must be distinct

For #2: this prevents "a prime number multiplied by 2", since it would be too fast to find that one factor was 2, perform a primality test on the remaining number, and find it prime.

For #3: this prevents a number like 120000000000...0000 that has a ton of non-distinct factors.
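These example parameters translate into a simple structural check on the claimed factor list. A sketch (the digit limits are the ones above and would be tuned):

```python
def truncate_to_101_digits(n):
    """Parameter 1: keep only the first 101 decimal digits (~335 bits)."""
    s = str(n)
    return int(s[:101]) if len(s) > 101 else n

def meets_parameters(factors):
    """Parameters 2 and 3: no prime factor longer than 51 decimal digits,
    and all prime factors distinct."""
    if any(len(str(f)) > 51 for f in factors):
        return False
    return len(factors) == len(set(factors))
```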

My estimate is this would take no more than 4 factoring attempts on average (more likely closer to 2 factoring attempts, at a total of ~3 hours of processing time) to find a prime factorization that satisfies all these parameters.  A fast way to cut down on non-distinct factors is to make sure the number is not divisible by multiple 2's, 3's, 5's, or 7's before trying to factor further.  If it is, change your nonce to roll a different number to factor.
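That pre-screen is cheap because it only needs trial division by a few small squares, for example:

```python
def passes_prescreen(n):
    """Quick reject before expensive factoring: if n is divisible by the
    square of a small prime, its factorization contains a repeated factor
    and can never be all-distinct, so roll a new nonce instead."""
    return all(n % (p * p) != 0 for p in (2, 3, 5, 7))
```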

Factoring a 116-digit number on a modern Ryzen will take around 14 hours.

At 101 digits (335 bits) it takes about 1.5 hours; truncated to 100 digits, about an hour.

Nice, so now we have an individualized proof of work: a person can prove they completed an intensive CPU process to say they deserve their URL to be boosted.  Well, we still may have a slight problem.  Say mine isn't the only search engine index that works like this.  A person could complete one of these proofs of work and get their URL boosted on my engine and on another engine at the same time.  Then the proof of work is not really valid, since only half the work was actually done for my search engine.  How do we make sure the work is only done for our search engine?  One way is that we (the search engine, or any other organization of course) add our own nonce.  So there would be our nonce, the person's nonce, the merkle root of the current external system state (like the current Monero block), and the SHA-384 hash of all of that.  This isn't a bad option, because the person can still keep re-rolling their own nonces as required, but chances are the centralized nonces will not be the same between my search engine and the other guy's search engine.  Also, there is no way for us, the central power, to give "good or bad" nonces to someone to affect how easy it is for them to factor, since the merkle root, the hashing algorithm, and the person's own nonce are all out of our control.
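Binding the work to one engine then just means folding the engine's nonce into the hash input. For example (the separator and string encoding here are arbitrary illustrative choices):

```python
import hashlib

def combined_challenge(server_nonce, user_nonce, merkle_root_hex):
    """Challenge bound to one search engine: hash the engine's nonce, the
    user's nonce, and the external merkle root together with SHA-384."""
    material = f"{server_nonce}|{user_nonce}|{merkle_root_hex}".encode()
    return int.from_bytes(hashlib.sha384(material).digest(), "big")
```

Two engines issuing different server nonces will, with overwhelming probability, produce different numbers to factor, so one factorization can't be spent twice.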

Also, just to be sure that our given nonce doesn't ever line up with another search engine's given nonce, we could set a time limit of say 1-7 days for the answer to be found after we issue the nonce for the current merkle root.  The longer we can allow the person to have, the better, so someone can use something like a Raspberry Pi if they want.

So I think we have a workable solution for Proof of Individual Work.  Of course this can be done with typical SHA-256 or another hashing algorithm instead of prime factorization, but I think we should prefer algorithms that are hard to speed up with GPUs, FPGAs, and ASICs.

Reference notes: how long it takes to factor a number; instructions per second of various processors.

Some example factorizations of 50-digit numbers:

near half-digit split gives 7 distinct factors

nearly half-digit split gives 5 distinct factors

same, 6 distinct factors

2 distinct factors, does not satisfy the requirement


OptEngine.org is an opt-in search engine

OptEngine:  Because Internet.  YouCrawl.  You Crawl. uCrawl.

OptEngine.org is an opt-in search engine and open source database/index where you have to submit URLs to the search engine for indexing.  Why is this important?  The majority of a search engine's potency is how often, how fast, and how deep it can crawl the web for content.  The reason small engines can't compete with Google is that they are not as good at crawling.  A ranking algorithm is easy to design, and it is easy for anyone to compete with Google's ranking algorithm, but not with their crawling capability, which likely uses billions of dollars of servers.  If you can have humans crawling the web for free, fewer ads will be needed to support the engine, and it will be easier for small engines to compete against the big ones.  More importantly, humans get to decide which pages are most important and worth the time to index, which will limit the index to the highest quality pages only, not just every page that exists.  Also, this database will be open source and always available for download, so any and every search engine can use the OptEngine database/index for free as a basis or part of their search engine's index.

OptEngine may include an option to where your page will be archived at archive.org or similar service.

OptEngine will likely not respect robots.txt, since you are opting in by telling us the URL to index.  While it is true that someone who is not the owner of the URL can tell us to index it, if the owner really doesn't want their page to be seen they can keep the URL private.  We are not the ones crawling the web; you are.  Why not just respect it anyway?  With the rise of social media and premade sites like Blogger and Wix, these sites can include a robots.txt that their customers either don't know about, have a hard time changing, or cannot change.  This is why we feel it is important not to respect robots.txt, so everyone can get their content to the world regardless of the digital ghetto they publish on.  Below we outline our own opt-in and opt-out options that can be used instead.  As for nofollow links: we do not use links to rank pages, so nofollow means nothing to OptEngine.

OptEngine will start as English-only and hopefully will branch out to other languages over time.

OptEngine will require you to burn some amount of Monero (XMR) to get your page into the index.  Why is this?  We don't want bots spamming the database, either with the same URL many times or with worthless pages, in order to bog down the index.  Cryptocurrency is a proof of work method: it proves that work was done to earn it.  Burning means you send the Monero to a provably unusable address so no one can ever spend it.  Isn't this wasting Monero?  Yes and no.  Yes, because the Monero can never be spent; but also no, because making it unspendable increases the scarcity of Monero, so everyone's Monero is worth more.  Why Monero and not Bitcoin?  First, because Monero's transaction fees are lower (typically well under 1 cent), and also because Monero has a tail emission, so we will never fully deplete Monero from circulation no matter how popular OptEngine gets.  Every URL you want indexed will require a separate burn transaction.  Depending on the current transaction fees on the Monero network, you could get your link posted for well under 1 cent.  A new burn address will be made roughly every 10 XMR that accrues in an address (about 20,000 pages at today's Monero value), so these burn addresses don't become a lucrative target for future quantum computer hackers.  Or we can use another method for proof of burn like this one.  If Monero transaction fees rise above 1/1000th of the average monthly income of the lowest-income country in the world (currently $0.041, since DR Congo's average income is $41 a month), then we will consider adding other cryptocurrency options if they have a tail emission.

In addition to burning Monero dust ("dust" means an insignificant amount of Monero), completing a CAPTCHA or equivalent will also be required, to help prove that not only machine work was done but human work as well.  You can do up to 10 URLs at once, though, so you don't have to complete a CAPTCHA for every URL, as that would be burdensome when trying to get your whole site indexed in a reasonable time frame.  Just note that if you do 10 URLs at a time, you will need proof of 10 separate burn transactions.

OptEngine search will rank results based on a few factors.  These factors are ordered in terms of importance, with factor 1 weighted heavier than factor 2 and so on.

#'s 1-4 are all the index fields, and they will be scraped from the site automatically when the URL is presented for indexing along with proof of Monero burn and the CAPTCHA.  However, at launch the scraping algorithm will likely not be complete, so the user will have to manually input fields 1-4.  This shouldn't be hard, as they can just copy and paste from their page.  It will be on the honor system that the info you provide is what is in the page itself.  Very few people, I think, would want to take the time to mislead others about the content of their page, but I'm sure it will happen as a prank from time to time, and you can downvote that page.  When the scraping algorithm is complete and included in OptEngine, you will have the option to either automatically scrape the data needed or input it manually, giving you maximum control over how your page data is stored.  What prevents people from uploading the same content under different URLs, or using other pages' data to fill in the index fields?  The Monero burn and the CAPTCHA make this sort of trolling labor-intensive, and downvoting is another way to discourage the practice.  But the benefit of using Human Intelligence (HI) for tuning the field input, to better help the searcher find what they are looking for, is worth the risk of misuse in our minds.

1: Categories.  Categories are the 3 longest words (or the first 3 words) in your <title> tag.  Each word is independent; no exact-phrase matches here.

2: Title.  Title is the 20 longest words in your <title> tag.  Each word is independent; no exact-phrase matches here.

3: Summary.  Summary is the first 1,000 characters of the content in your <body> tag.  Exact phrases can be matched here.

4: Text.  Text is the content between characters 1,000 and 30,000 in your <body> tag (so up to 29,000 characters).  Exact-phrase matches can be taken from here.
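A rough sketch of what the eventual scraper could do for these four fields, using only Python's stdlib HTML parser (taking the "longest words" interpretation of categories, which is one of the two options above):

```python
from html.parser import HTMLParser

class FieldScraper(HTMLParser):
    """Collect the <title> text and all other page text separately."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.body = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data
        else:
            self.body.append(data)

def index_fields(html):
    """Build the four OptEngine index fields from raw page HTML."""
    p = FieldScraper()
    p.feed(html)
    words = sorted(p.title.split(), key=len, reverse=True)
    text = " ".join(s.strip() for s in p.body if s.strip())
    return {
        "categories": words[:3],   # 3 longest title words
        "title": words[:20],       # 20 longest title words
        "summary": text[:1000],    # first 1,000 body characters
        "text": text[1000:30000],  # body characters 1,000-30,000
    }
```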

Those 4 fields are all you can search at launch, using typical Boolean queries.  The following will not be included at launch but will be added later as feasible.

5. Sorting Algorithm.  If results rank the same in the above relevance ranking (or even if not, just to provide more customization), the following criteria can be used by the user to further filter or rank the results.




Popularity (hit count)

Upvote # (every IP can vote up or down once)

Upvote %

Burn Amount (amount of Monero burned).  To gain rankings in this category you can burn more Monero than is required to index your URL.

OptEngine Rank - Our best guess at ordering.  Will combine Category//Length//date//popularity//upvote # & %//Burn amount - each weighted according to our best theory and those percentages will be open source.

Design your own Rank - Tune all the above factors into what you think are the optimum ratios.  This can look something like #AABCDD6742E3, and you can copy and paste your algorithm code into future searches without having to sign in or anything.  There will be no IP address saving (except for voting on post rankings), no search saves, no sign-ins, or any other info gathering at all whatsoever.
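The shareable code could be as simple as hex-packed weights, one byte per ranking factor. A sketch, where the factor order and the 0-255 weight scale are illustrative assumptions:

```python
def encode_rank_code(weights):
    """Pack per-factor weights (0-255 each) into a shareable hex code,
    e.g. for (relevance, popularity, upvote #, upvote %, burn, date)."""
    return "#" + "".join(f"{w:02X}" for w in weights)

def decode_rank_code(code):
    """Recover the weight list from a pasted code."""
    h = code.lstrip("#")
    return [int(h[i:i + 2], 16) for i in range(0, len(h), 2)]
```

Since the code fully determines the ranking, no account or server-side storage is needed to reuse a custom algorithm.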

There will also at some point be a special button next to a search result that says "links".  What this does is show a list of webpages that have linked to the particular result you are considering.

So at launch, here is what OptEngine will be: a page with a search box and "search it" button, and a URL input box with an "Opt it" button.  "Opt it" is short for "index my URL using OptEngine".  So OptEngine will be a simple search site and URL indexing site.  When you enter a URL and click "Opt it", you will be presented with a page that has 5 boxes.  The first box will ask for your Monero transaction key (or other key that proves you burned Monero); the second box will ask for 3 keywords, aka categories, to describe your page; the third box will ask for 20 keywords, aka the title of your page; the fourth box will ask for a 1,000-character-max summary of your page; and the fifth box will ask for the 30,000-character-max content of your page.  Complete the CAPTCHA, and your URL, along with the data you provided, will be added to the index.

The entire index will be publicly available and open source, always available for full download like commoncrawl.org.  We hope other search engines use the OptEngine index as part of their search algorithms.

What happens if a webpage is submitted for indexing multiple times?  This is a tough question.  We want to make it so the most recent indexing replaces the previous indexing.  We are not an archival service, which is why we want to partner with archive services to save people's pages permanently on those sites.  What we may do, once our scraping algorithm has launched, is make subsequent updates require our scraping algorithm to make any changes to the index data, so someone can't troll other users' page indexes.  Another option is for us to require an "opt-in" phrase on the page, such as #Opty (this is currently the preferred option).  As long as a page has this somewhere in it, the page's index can be updated manually.  If an #Opty is not present, then the page's index can be updated only by our scraper when we are asked to do that, though it can still be initially indexed manually without an #Opty present.  If you do not want the scraper or a manual change to update your URL's index, you can use the phrase #OptOuty somewhere in your text; however, to prevent abuse by digital platforms, if an #Opty is present it overrides an #OptOuty, nullifying it and allowing manual and scraped indexing.  Another option is to allow users to provide an email address to be notified if any change is made to a page they care about.  We don't like this option, though, as it would make our database a target for law enforcement.  Even upvoting, which would need to store IP addresses, is something we really don't want to do if not 100% necessary.  Not sure what will be required at this time.

How will OptEngine store and make all this data easily accessible?  We will begin on Amazon AWS S3 storage.  This allows 1 TB of data storage and access for only $40 a month (the same cost as having an Amazon seller account).  With 1 TB of data we estimate we could host up to 1 billion page indexes of the internet (assuming most page indexes will be around 200 words or 1,000 characters - about 1 KB).  Google currently indexes 18 billion pages, so we believe 1 billion may be the max we could hope for, since only the highest quality sites will likely be indexed by our site, because it takes work for someone to get a page indexed.  This means we estimate our maximum hosting cost at only around $40 per month.  Talk about a good deal!  In the future we want this data replicated across multiple hosting platforms, including our own, in multiple languages, so our costs might rise by up to 100-fold.  At that point, though, we would be so widely adopted and used that we would be shocked if we couldn't raise $4,000 a month to cover costs by basically hosting the entire useful internet!

How will OptEngine.org make money to support itself?  It will accept donations and may also sell merchandise.  It may also sell ad spots at the bottom of the page as simple links.  These links may be ranked by the monthly donation amount of the sponsor, and the maximum allowed donation for sponsorship could be set somewhere around $100 per month just so the competition stays under control.  If multiple sponsors contribute the same amount, priority in ranking will be given to those who have been supporting the longest.  OptEngine will never boost search ranks or show ads within search results, or show banner ads, popups, or even sidebars.  Sponsor spots will only be text links in 12pt font, a maximum of 20 characters long, and always at the bottom of the page; it will never be required to scroll down that far to use the Engine to its fullest.

"Free To Use" and "Free to Reuse" Physical Goods Business Model

We have heard of "Free to Play" in software, which has gained significant traction and market share.  Free is taking over the world, like I predicted in the FreeContaigen.  However, hardware, or any physical object as opposed to software, has been slow in adopting free and open source principles.  Open source hardware is a thing, but it has not taken off nearly as quickly as open source software.

I am proposing a new business model called "Free to Use" and "Free to Reuse".  Just like the name implies, Free to Use allows your customers to use the product for free, forever.  Free to Use differs from "Free to Try" (like shareware and free samples) because there is no end date; Free to Use is a permanent business model.  I also want to head off any "free; just pay shipping" scams by making Free to Use and Free to Reuse fully free, including shipping.  Another alternative business model is "at cost", where you sell something at the exact cost it takes you to make and ship it, but that is not discussed here, and in my opinion it is not a good business model since it does little to garner either public support or profit.

Free to Use, by definition, is not open source.  Free to Reuse is free and open source (recipe/process given), which is preferred, since most customers nowadays are very discerning and want to know exactly how something is made, to see if they agree with the process before purchasing or using it.

Free to Reuse is both free to use and also fully open source process to make it.  This allows customers to feel confident in the product they are using and also gain the confidence to know that if the business ever goes away that they can continue to make the product themselves or adapt it to their own specific use parameters.

How do you stay in business with a Free to Use or Free to Reuse product?  It is easier than you think.  Customers today are very accustomed to voluntarily supporting people or companies they believe in, whether by donations, sending gifts to a PO box or similar, or buying other products or services you sell in addition to the free ones.  By using a Free to Use or Free to Reuse business model you are giving customers a gigantic reason to believe in you.  One good way to do it is to have free and paid versions of each product: typically the small and easy-to-ship version would be free, whereas the larger and/or fancier version is paid.

This is Givism and will eventually take over the entire earth as seen in Daniel 2:44 and is true generosity and care for human life.

Free-to-read and free-to-use publications exist (though again, those are digital, and my model is for physical products).


Could Ceres be an ejected moon? From Earth even?

It looks like scientists think it is possible we had a second moon at one point, around 1,200 km in diameter.  I have also wondered if Ceres, the dwarf planet, is an ejected moon of some sort.  Ceres sits at just under 1,000 km in size, putting it in the likely size range of Earth's second moon.  Another thought is that Ceres could have been an old moon of Mars, but both of Mars' moons are under 30 km in size, so a 1,000 km moon is a stretch, though not impossible.

According to my theory on gravity, Mars probably used to have much stronger gravity, so it could have held a larger moon.  My theory on why the Earth's moon is drifting farther away is that the Earth's gravity is decreasing as well.

Why do I think the gravity of a planet can decrease while it still has the same mass?  Because I don't think gravity is mass-dependent, but rather charge-dependent.  Earth and all other planets and stars are dynamos that generate an electrical charge.  Gravity is primarily an induced charge attraction (along with some centripetal effects from rotation).