Thanks to Brucey's work I discovered how much easier life is when you use the right tool, without having to reinvent the wheel every time.
In this case the problem was: how to resume an interrupted download.
It seems a trivial question: every day we download something with our mail programs, browsers, and other applications.
In BlitzMax I wrote some utilities to connect to a remote server and upload and download files, and they work very well (they are integrated in BlitzMax Companion and in MaxIDE to check for newer versions and language packs).
I realized that downloading a BIG file is not so easy: there is plenty that can go wrong out there! A lost connection, a delay, a crash of the application.
If the file is small (like a txt file) there's no problem: just re-download it.
But what if the file is BIG and you have limited bandwidth (like me!), so every KB is precious?
You need to resume the download.
After some tests I discovered that I needed to 'negotiate' with the server (if it supports this!) to resume a previously downloaded file. Too complicated to do by hand.
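For the record, this 'negotiation' boils down to the HTTP Range mechanism: the client asks for the bytes starting at a given offset, and a server that supports ranges answers with 206 Partial Content. A hypothetical exchange (the file name, host, and all byte counts here are just illustrative numbers, assuming 10000 bytes were already saved locally) might look like this:

```http
GET /download/MiniPlayer.zip HTTP/1.1
Host: www.graphio.net
Range: bytes=10000-

HTTP/1.1 206 Partial Content
Content-Range: bytes 10000-524287/524288
Content-Length: 514288
```

A server that does not support ranges would instead reply 200 OK and send the whole file from the beginning, which is exactly the case-handling that makes doing this by hand tedious.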
Result: I looked at Brucey's libcurl module (BaH.libcurl).
So many applications use libcurl that this time I surrendered the idea of building everything on my own…
```blitzmax
SuperStrict

Import BaH.libcurl
Import BRL.StandardIO

Local curl:TCurlEasy = TCurlEasy.Create()
Local out_stream:TStream   ' where to save the file
Local fsize:Int            ' the size of the LOCAL file
Local filename:String = "MiniPlayer.zip"

If FileType(filename) = 1
	fsize = FileSize(filename)
	Print "File already downloaded - FileSize: " + fsize + " - recover from here...?"
	out_stream = OpenStream(filename)
	SeekStream(out_stream, fsize)   ' put at the last position - APPEND
Else
	out_stream = WriteFile(filename)   ' start from zero
End If

curl.setOptInt(CURLOPT_FOLLOWLOCATION, 1)
curl.setWriteStream(out_stream)              ' set where to write the file
curl.setProgressCallback(progressCallback)   ' set the progress callback function

Local url:String = "http://www.graphio.net/download/" + filename
curl.setOptString(CURLOPT_URL, url)
curl.setOptInt(CURLOPT_RESUME_FROM, fsize)   ' set FROM where to restart the download

Local res:Int = curl.perform()
curl.cleanup()

If out_stream Then CloseFile out_stream

Function progressCallback:Int(data:Object, dltotal:Double, dlnow:Double, ultotal:Double, ulnow:Double)
	Print " ++++ " + dlnow + " bytes"
	Return 0
End Function
```
The above code simply downloads a file from this website. If you stop execution (in MaxIDE with the STOP button), the next run resumes the download from the last position.
It works without problems.
Now I must test it on BIGGER files… and find a way to 'pause' or 'interrupt' a download in progress.
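libcurl itself offers a hook for this: if the progress callback returns a non-zero value, the transfer is aborted and perform() returns CURLE_ABORTED_BY_CALLBACK. Since the file is written to disk incrementally, an aborted transfer can be resumed later with the same CURLOPT_RESUME_FROM technique. A minimal sketch (the abortRequested global is my own invention, e.g. a flag set from the GUI when the user clicks 'Pause'):

```blitzmax
Global abortRequested:Int = False   ' hypothetical flag, set elsewhere (e.g. a 'Pause' button)

Function progressCallback:Int(data:Object, dltotal:Double, dlnow:Double, ultotal:Double, ulnow:Double)
	Print " ++++ " + dlnow + " bytes"
	If abortRequested Then Return 1   ' non-zero aborts the transfer (CURLE_ABORTED_BY_CALLBACK)
	Return 0
End Function
```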
This also means that MIAB is not completely halted 😀 – a step closer!