Transcode Tests

How can that be if the worker gets paid $0.01 per 30-second work packet ($0.02/minute) but the publisher is charged $0.025/minute of video? Where does the $0.005/minute go if not to VideoCoin? Don’t get me wrong, I think this is fair and deserved; I’m simply pointing out that the link you provided does not show this commission to the network operator, and it should, for full transparency.
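For reference, here is the arithmetic as I understand it, nothing official, just restating the numbers above:

```go
package main

import "fmt"

func main() {
	const workerPerPacket = 0.01  // $ paid to a worker per 30-second work packet
	const publisherPerMin = 0.025 // $ charged to the publisher per minute of video

	workerPerMin := workerPerPacket * 2 // two 30-second packets per minute = $0.02
	fmt.Printf("worker: $%.3f/min  publisher: $%.3f/min  unaccounted: $%.3f/min\n",
		workerPerMin, publisherPerMin, publisherPerMin-workerPerMin)
}
```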

@Ram_Penke

Another day, another test. This is the first instance where the publisher was charged but the work was not completed and there was no output source. For reference, this was a multi-segment video upload from local storage. It was sent to 3 workers (genesis, BDC, BluBlu).

The Explorer showed that all three workers were busy with the work. The BluBlu and genesis pools completed their segments while BDC stalled at “Busy”. After a few minutes the stream ended up failing, but the publisher was still charged. My question: if one segment of the video fails on encoding, are all the other segments still charged? Could there be a backup for the failed segment if a worker cannot complete it, so that the entire video could still be finished?

Here are snips of the BluBlu and genesis pool PublicMint explorers showing the payment for the job, while BDC’s was unpaid.


This brings up another issue: in my tests it seems as if BDC is mostly the one running into the stalling problems you mentioned. How would an operator identify the issues with their specific worker on a failed job? I’m assuming this would also lead to a slashing event once slashing is implemented. What if the issue is not on the worker’s end and is caused by an external source, such as a bug?

@Santiago_Velez
All these transactions are recorded on the blockchain.
The blockchain event AccountFunded indicates the amount transferred to the worker.
The blockchain event ServiceFunded indicates the amount retained by the network.
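As a rough illustration (assuming go-ethereum, a hypothetical RPC endpoint and contract address, and assumed event signatures, since the actual ABI is not shown here), those two events could be pulled from the chain like this:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Hypothetical RPC endpoint and stream/escrow contract address.
	client, err := ethclient.Dial("https://rpc.videocoin.example")
	if err != nil {
		log.Fatal(err)
	}
	contract := common.HexToAddress("0x0000000000000000000000000000000000000000")

	// Assumed event signatures; the real contract ABI may differ.
	accountFunded := crypto.Keccak256Hash([]byte("AccountFunded(address,uint256)"))
	serviceFunded := crypto.Keccak256Hash([]byte("ServiceFunded(address,uint256)"))

	logs, err := client.FilterLogs(context.Background(), ethereum.FilterQuery{
		Addresses: []common.Address{contract},
		Topics:    [][]common.Hash{{accountFunded, serviceFunded}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range logs {
		switch l.Topics[0] {
		case accountFunded:
			fmt.Println("worker payout, tx:", l.TxHash.Hex())
		case serviceFunded:
			fmt.Println("network fee, tx:", l.TxHash.Hex())
		}
	}
}
```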

@BluBlu

Good exploration of the network. There are multiple items to be considered here. The VideoCoin network operates at the API level, and the Console is a Web UI application on top of the API.

Payment for a Segment:
At the API level, each segment is paid for only if it is successfully encoded and made available to the publisher. The publisher can retrieve this segment and pass it to a CDN, perform further processing, save it, etc. From this point of view, all the segments that are paid for are available to the publisher through the API.
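To make that concrete, here is a minimal sketch of what retrieving paid segments could look like; the endpoint path, field names, and state values are hypothetical, since the real VideoCoin API is not quoted here:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// segment mirrors a hypothetical API response; real field names may differ.
type segment struct {
	Index int    `json:"index"`
	State string `json:"state"` // assumed values: "encoded", "failed", "pending"
	URL   string `json:"url"`   // location of the encoded output
}

func main() {
	streamID := "<your-stream-id>"
	resp, err := http.Get("https://console.videocoin.example/api/v1/streams/" + streamID + "/segments")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var segs []segment
	if err := json.NewDecoder(resp.Body).Decode(&segs); err != nil {
		log.Fatal(err)
	}
	// Only successfully encoded segments are paid for and retrievable;
	// each of these could be handed to a CDN or processed further.
	for _, s := range segs {
		if s.State == "encoded" {
			fmt.Println("segment", s.Index, "available at", s.URL)
		}
	}
}
```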

Failed or delayed segment transcode:
Slashing, when enabled, is expected to enforce discipline on workers.

Backup for transcoding failed segments:
It should be there in the video-infra. I will check and get back to you to confirm whether it is enabled.

Finally, the failure of the stream reported by the Console:
I think it is a bug in the Console application: all the segments of the file are encoded, but it fails to identify the completion. We will look at it and update you.

@BluBlu
Feedback on your question regarding network bug vs faulty worker:

  1. Docker containers are expected to provide a uniform execution environment.
  2. As part of enabling slashing, performance metrics will be collected that keep track of successful transcodes and failures (a minimal sketch of such a counter follows this list).
  3. When a reasonable number of workers are running, bugs will pop up across more workers.
    Migration to the UI-based worker control panel (under development) will make it easy to surface any worker issues.
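A minimal sketch of the kind of per-worker counter implied by point 2; this is an assumed shape only, and the real metrics tied to slashing may also track proof validity, latency, and uptime:

```go
package main

import (
	"fmt"
	"sync"
)

// transcodeMetrics keeps a simple running count of transcode outcomes.
type transcodeMetrics struct {
	mu        sync.Mutex
	succeeded int
	failed    int
}

func (m *transcodeMetrics) record(ok bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if ok {
		m.succeeded++
	} else {
		m.failed++
	}
}

// successRate is the kind of signal that could lower a worker's rank
// in selection without triggering a slash.
func (m *transcodeMetrics) successRate() float64 {
	m.mu.Lock()
	defer m.mu.Unlock()
	total := m.succeeded + m.failed
	if total == 0 {
		return 1.0
	}
	return float64(m.succeeded) / float64(total)
}

func main() {
	var m transcodeMetrics
	m.record(true)
	m.record(true)
	m.record(false)
	fmt.Printf("success rate: %.2f\n", m.successRate())
}
```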

In summary, slashing will happen only to workers that are intentionally made to fault or are attempting a form of attack on the network. Slashing is mainly triggered when a worker submits a false proof-of-work. Crashes and downtime may instead be used to lower a worker's rank in selection.

This can be discussed more while introducing slashing.

@BluBlu and others trying the transcoding:
The minimum required credit to run transcoding is set to $1 as of today.

Apologies if I am overloading you with similar requests. In this instance all segments but one were completed and paid. The last segment has been stalled for over 30 minutes while the worker is stuck in “busy”, which is preventing a full video output. As Ram stated, the paid segments are available to the publisher, but not through the current console. Is there a time cap for a worker to finish its work before the segment gets sent to another worker to complete?



The end result was a 40+ minute wait before the last worker went from “busy” to “idle” and the stream failed.

@BluBlu We are looking at the issue and will resolve it soon.

I gave the new File output (instead of HLS) a try and everything worked amazingly.

I successfully compressed an 8K source file and a 3D video file, and both gave me a working, downloadable mp4 output link.

EXCELLENT


@BluBlu
Update on some of the issues that you reported earlier:
Limitation of 2 streams per account: there is no limit, but the network checks whether there is enough balance in the account before starting a stream. With the old $10 minimum, you might have hit that condition. With the new $1 minimum, you should be able to start more simultaneous streams.

Successfully transcoded an 8K source file (2.2 GB) to an mp4 that ended up being 119 MB.


Congrats @BluBlu
Write a blog post and share your experience.

@Ram_Penke

I saw these errors today in transcode tests, appearing after each segment of a stream.
Here is the stream ID: 6428436814824408381

The entire corresponding log of this stream on my worker is in our sync folder with the suffix: 07-18-20_241EST_CHUNK_ERROR

time="2020-07-15T11:24:55.5238128Z" level=info msg="segment has been uploaded" cid=15e6bfc0-f126-412b-86ed-7b1955e0e4b6 segment=2 task_id=ae10a793-c927-4f02-70ea-e29dab3199ad version=v1.1.1-pe-717f3f7
time="2020-07-15T11:24:56.2982308Z" level=error msg="failed to get in chunks: no contract code at given address" cid=15e6bfc0-f126-412b-86ed-7b1955e0e4b6 segment=2 task_id=ae10a793-c927-4f02-70ea-e29dab3199ad version=v1.1.1-pe-717f3f7
time="2020-07-15T11:24:56.3005996Z" level=error msg="no contract code at given address" cid=15e6bfc0-f126-412b-86ed-7b1955e0e4b6 task_id=ae10a793-c927-4f02-70ea-e29dab3199ad version=v1.1.1-pe-717f3f7
time="2020-07-15T11:24:56.3006402Z" level=info msg="task has been completed" cid=15e6bfc0-f126-412b-86ed-7b1955e0e4b6 task_id=ae10a793-c927-4f02-70ea-e29dab3199ad version=v1.1.1-pe-717f3f7
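For context, "no contract code at given address" typically means the client is calling a contract address where no bytecode is deployed (or the node is pointed at the wrong chain or address). A minimal sketch of checking for deployed code with go-ethereum, using a hypothetical RPC endpoint and placeholder address:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("https://rpc.videocoin.example") // hypothetical endpoint
	if err != nil {
		log.Fatal(err)
	}
	// Substitute the contract address the worker was trying to call.
	addr := common.HexToAddress("0x0000000000000000000000000000000000000000")

	code, err := client.CodeAt(context.Background(), addr, nil) // nil = latest block
	if err != nil {
		log.Fatal(err)
	}
	if len(code) == 0 {
		fmt.Println("no contract code at given address:", addr.Hex())
	} else {
		fmt.Printf("contract code present (%d bytes) at %s\n", len(code), addr.Hex())
	}
}
```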

@BluBlu
We will check this error.

I tried a different test today with different variables. The parameters were:
10 streams in parallel
7-minute-30-second 3D file clips uploaded by URL
Outputs all the same (FullHD File)

I tried to start 10 streams at once on a 7:30 3D clip via URL upload. 6 of them uploaded successfully, 3 failed on upload start, and 1 uploaded extremely slowly. Once the 6 had uploaded successfully, the network only processed 1 stream at a time, which means there’s a bottleneck somewhere. My guess is the segmenting worker, which right now would be a centralized VideoCoin source. Even if there were, let’s say, 1,000 worker nodes spread throughout the world, wouldn’t the point of failure and limiting factor still be this centralized segmenter?

I’m also curious to understand why 10 streams with the same parameters end up with different results at vastly different speeds.

@Ram_Penke Is the network segmenter a bottleneck? Why can’t workers be anonymous segmenters?

@BluBlu @Santiago_Velez
Interesting tests. By design the VideoCoin network auto-scales, so it is less likely the segmenter caused the issue. I will try to reproduce it and pass the info to our video-infra team. Could you please share the URL of the clip? Did you verify that the site hosting the test clip is not metering the download? I see this issue with many sites: one or two downloads are fast, then it slows down.

Here is the URL for the test file.

Even after 6 of them showed as successfully uploaded, the network still only worked on 1 stream at a time before moving on to another stream. Possibly it’s related to the number of workers available on the network? This clip would be 16 segments per stream; I will reproduce the test with a smaller file that has fewer segments per stream.

@BluBlu
The download is very slow from this site. Could you paste the movie clip URL that you entered in the VideoCoin console url-upload input? The above link takes me to a web page and forces a file download from the browser.

That is the URL I pasted into the console.

For my new test I chose file upload instead of URL, and the results were almost identical.
The file I uploaded is here: https://www.demolandia.net/x6k5.
This file is only 6 segments per stream, so it isn’t limited by the total number of workers available.

Streams 1, 3, 4, and 5 ended up finishing.
Stream 2 uploaded 100%, then failed.
Streams 6, 7, and 8 failed on start.

Streams 1, 3, 4, and 5 were worked on one at a time, each waiting until the previous one finished.

@BluBlu
I did some testing and suspect the site that you are using is causing the issues you have seen. For example, when you try to download a file multiple times from the site using your browser, you may see that sometimes the download is fast while other times it takes a very long time (which I suspect is causing a timeout in the ingest).

You can check simultaneous operation of the segmenter using two files: prepare both streams for upload and start them, then monitor the events in the explorer or VideoCoin console to see the simultaneous execution. If you have enough upload speed, you can extend this to multiple files. There is no difference in handling file or URL uploads; the segmenter starts once the file is available.