Fork multyvac #17
Instead of continually editing the comment above, here are the more salient points. If we go our own direction, we can have independent clients that do proper serialization for other languages like Node, Ruby, etc. This is sounding awfully similar to the Jupyter kernel spec, as well as Sense engines...
👍 I'm definitely interested in doing this. I think the protocol is solid enough as a starting point, and we can always extend it with additional calls if we want to support more advanced use cases, like streaming results. I do want to keep going and support as much of multyvac as we can, because then existing code written against multyvac will work against cloudpipe without changes, which is pretty cool. It looks like "cloudpipe" is available on PyPI, RubyGems, and npm (npm has a "cloud-pipe" package, though). How does
It would be awesome to provide a bridge to Jupyter kernels. We'd get instant server-side support for a ton of languages. We could add a "result_source" of "zeromq:" and expose that in the container... hmm. 💡
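To make the brainstorm above concrete: the idea is that a job result record could point at a socket rather than inlining the result. The field name and URI format below are invented for illustration; nothing like this exists in multyvac or cloudpipe today.

```python
# Invented example payload -- "result_source" and its "scheme:address"
# format are a sketch of the idea, not part of any existing spec.
job_result = {
    "jid": 42,
    "status": "done",
    # Instead of inlining the result, point at a socket exposed in the container.
    "result_source": "zeromq:tcp://127.0.0.1:5555",
}

# A client would dispatch on the scheme prefix to decide how to fetch the result.
scheme, _, address = job_result["result_source"].partition(":")
```

A client seeing `scheme == "zeromq"` would connect to `address` to stream the result, while plain inline results could use a different scheme.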
As I finally re-noticed, multyvac isn't actually the same interface as PiCloud's
As I continue to dig here, it looks like PiCloud had a few configurables for alternate clusters of piclouds. This is in
Makes me think they started with the "let's pickle Python code and run it remotely" idea, then pivoted to "let's run whatever with a more generic API".
From that:

```python
def resolve_by_serverlist(self):
    self.serverlist_resolved_url = True
    try:
        # Must not call send_request as that can re-enter this function on failure
        resp = self.send_request_helper(self.server_list_url, {})
    except CloudException:
        self.diagnose_network()  # raises diagnostic exception if diagnosis fails
        raise  # otherwise something wrong with PiCloud
    for accesspoint in resp['servers']:
        try:
            cloudLog.debug('Trying %s' % accesspoint)
            # see if we can connect to the server
            req = urllib2.Request(accesspoint)
            resp = urllib2_file.urlopen(req, timeout=30.0)
            resp.read()
        except Exception:
            cloudLog.info('Could not connect to %s' % accesspoint, exc_info=True)
        else:
            self.url = str(accesspoint)
    ...
```
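For what it's worth, the failover loop above is easy to sketch in modern Python. This is a hypothetical reconstruction, not PiCloud's code: the function signature is mine, and the connection probe is injectable so the logic can be exercised without a network.

```python
import urllib.request

def resolve_by_serverlist(servers, probe=None, timeout=30.0):
    """Return the first access point that answers, or None.

    Mirrors PiCloud's loop: try each server in order and keep the
    first one we can actually reach. `probe` defaults to a plain
    HTTP GET but can be swapped out for testing.
    """
    if probe is None:
        probe = lambda url: urllib.request.urlopen(url, timeout=timeout).read()
    for accesspoint in servers:
        try:
            probe(accesspoint)  # see if we can connect to the server
        except Exception:
            continue            # unreachable; fall through to the next one
        return str(accesspoint)
    return None
```

With a probe that raises for the first host, `resolve_by_serverlist(['https://a.example', 'https://b.example'], probe=...)` skips to the second, which is exactly the alternate-cluster behavior the configurables suggest.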
Welp, I got multyvac to crap out on a simple job that submitted too quickly:

```python
import multyvac
import os

api_url = os.environ['CLOUDPIPE_URL']
multyvac.config.set_key(api_key=os.environ['OS_USERNAME'],
                        api_secret_key=os.environ['OS_PASSWORD'],
                        api_url=api_url)
print(multyvac.get(multyvac.shell_submit('ls -la')).get_result())
```

```
Traceback (most recent call last):
  File "pi.py", line 12, in <module>
    print(multyvac.get(multyvac.shell_submit('ls -la')).get_result())
  File "/usr/local/lib/python2.7/site-packages/multyvac/job.py", line 342, in shell_submit
    return r['jids'][0]
KeyError: 'jids'
```

I'll look into this later, but I think there needs to be better handling on multyvac's side.
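Something like the following on the client side would at least turn the bare `KeyError` into an actionable message. This is a hypothetical patch sketch; `extract_jid` and `MultyvacJobError` are names I made up, not multyvac's actual internals.

```python
class MultyvacJobError(Exception):
    """Raised when a submit response doesn't contain job ids."""

def extract_jid(response):
    # Guard the lookup that currently blows up with KeyError: 'jids'
    # and surface whatever the server actually said instead.
    jids = response.get('jids')
    if not jids:
        raise MultyvacJobError(
            'job submission failed; server replied: %r' % (response,))
    return jids[0]
```

That way a throttled or failed submit reports the server's error payload rather than a traceback deep inside `job.py`.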
Weird. Sometimes.
On a separate note, make sure to use
As we went along implementing what we could glean of the spec, we found some known limitations of multyvac: