
Weekly Meeting 2017 01 05


Full IRC log

kristenc: #startmeeting weekly meeting
ciaomtgbot: Meeting started Thu Jan  5 17:00:41 2017 UTC.  The chair is kristenc. Information about MeetBot at http://wiki.debian.org/MeetBot.
ciaomtgbot: Useful Commands: #action #agreed #help #info #idea #link #topic.
ciaomtgbot: The meeting name has been set to 'weekly_meeting'
kristenc: #topic roll call
kristenc: o/
jvillalo [[email protected]] entered the room.
btwarden: o/
mrkz: o/
tcpepper: o/
jvillalo_mobl [[email protected]] entered the room.
jvillalo_mobl: o/
markusry: o/
jvillalo left the room (quit: Client Quit).
kristenc: I'm going to give it 2 more minutes
albertom: o/
carlosag [c037362c@gateway/web/freenode/ip.192.55.54.44] entered the room.
kristenc: #topic Opens
kristenc: I have one
mrkz: I do have one as well
kristenc: anyone else?
kristenc: ok, i get to go first :)
mrkz: please :)
markusry: I have one too
anunez9 [c037362c@gateway/web/freenode/ip.192.55.54.44] entered the room.
kristenc: last call for opens...
kristenc: #topic Agenda
kristenc: this is my open
kristenc: now that we are not going to be doing bug triage in this meeting, I'm wondering whether people have a sense of how we'd like to use this meeting time.
kristenc: we can bug scrub if we want to, but I'm not sure.
kristenc: anyone have any opinions?
rbradford: topics like: 1. whether specifying volumes on the command line should be additive or replacement and documenting that 2. deciding on -> GiB everywhere for volumes
***tcpepper views it as a great opportunity for broader design discussion than happens in code reviews
kristenc: ok, so decision making.
albertom: +1 rbradford and tcpepper
rbradford: kristenc, architectural discussion leading to decision making
tcpepper: decision making and rbradford's 1 & 2 are more implementation focused specifics than I intended with my comment
mrkz: makes sense to me to discuss ^ in here
tcpepper: as rbradford follows up...the pre-decision making architectural discussion.  philosophical discussion, not implementation specifics..
mrkz: that way community can give us input for decision making
kristenc: ok - we can do that. These types of things will need to be put on the agenda ahead of time in order for us to be able to spend time thinking about them. So, if this is the way we are going to go, I'd like it if people could submit agenda items to the mailing list ahead of time. Of course we'll continue to do opens.
tcpepper: mailing list or wiki too
kristenc: wiki only risks getting overlooked, so I would say definitely mailing list, wiki would be great in addition.
tcpepper: I thought our agendas were documented in the wiki ahead of each meeting
kristenc: and if there's nothing on the agenda, we'll meet briefly for opens and thats it.
kristenc: yes - I'll put the stuff on the mailing list in the wiki
kristenc: but if there's a topic that requires Deep Thought, it'd be nice to get more than 30 minutes notice, which is about when I look at the wiki.
tcpepper: good point
tcpepper: in practice it may be that the first time something pops on the agenda it's superficial discussion to plant the seed for the follow up meeting, giving a week of notice then for detailed thought
rbradford: can you subscribe to that wiki page?
rbradford: otherwise i'm not going to remember to check it
kristenc: #info we will use this meeting for architectural discussions going forward
kristenc: #info agenda items should be submitted to mailing list if possible.
kristenc: rbradford, it is a git repo, is there a hook somehow that github gives you?
rbradford: kristenc, oh i thought you meant the github wiki page
***mrkz likes tcpepper's flow idea
rbradford: https://github.com/01org/ciao/wiki/Meetings#weeklymeetings
kristenc: rbradford, I do mean that github wiki
kristenc: it's a git repo underneath
kristenc: ok - anything else you would all like us to see done with this meeting?
tcpepper: administrivia like making sure folks know who are the gatekeepers
tcpepper: and any key things happening in the week, like maybe a PR merge slow down to focus on getting some particular thing in
kristenc: argh - that reminds me that I forgot to make a new gatekeeper list. Today starts a new gatekeeper schedule. I will do it really quick.
rbradford: well i know it's me :-)
tcpepper: discussion of PR conflicts and ordering...two people working in an area trouncing each other a bunch and making a lot of work for rebasing
kristenc: rbradford, yes - we need to hire another person in the UK to make this more fair :).
tcpepper: haha
pixelgeek [[email protected]] entered the room.
tcpepper: I'd like the meeting to feel like if I was new and attended that I'd have some sense of the pulse of the project..what's happening
tcpepper: or had been gone for a while and came back, I'd get up to speed on things
kristenc: ok - we can include a summary of what is going on with development etc.
tcpepper: if that pulse stuff is in the meeting, the meeting minutes become informative about the project
mrkz: that's actually a nice idea
mrkz: makes it easier for newcomers to catch up faster
kristenc: #info we will also use this meeting to relay development status and updates, as well as logistical details
kristenc: This will require a bit of preparation on my part, but I'm up for it.
kristenc: anything else?
kristenc: ok, so next meeting we'll start this and of course if anything stops working for people just bring it up.
kristenc: my open is closed now, let's move to mrkz
mrkz: nothing coming to my mind atm, but I guess we could add/tune topics to cover as we continue having the meetings and noticing new needs to catch up @ meeting
mrkz: thanks kristenc
mrkz: so I just updated the PR I'm working on to support non-admin image service usage by modifying the identity wrapper in ciao-controller; but I hit a bit of a wall and I did comment about it here https://github.com/01org/ciao/pull/981#issuecomment-270696278 If you could take a look just to check if you're Ok with the path this is going into I'd appreciate that :)
kristenc: #topic non-admin image service
kristenc: #info everyone please review PR #981
kristenc: mrkz, I'll take a look later today, I'm still trying to chase these release failures we're getting.
mrkz: so TL;DR: got image service working, but as I repeat some stuff that already exists on identity wrapper, I would propose a rewrite to get it in better shape
kristenc: mrkz, does your PR contain the rewrite?
mrkz: kristenc: no rush, I'll continue to work as is atm in the meantime to see how can I get out of the trouble while still getting it to work and not break what it does today :)
mrkz: kristenc: not yet, I'd prefer to get feedback prior to that, because maybe better ideas to fix would come up
kristenc: ok
mrkz: so that's basically the reason I ask, if everyone is Ok with current path, I'm happy to continue hacking that piece of wrapper :)
kristenc: markusry, did you have an open?
mrkz: so that's all from my side, so we could move on to next open :)
markusry: Yes
markusry: It's about goperhcloud
kristenc: #topic gophercloud
markusry: or goperhcloud rather
markusry: I give up
markusry: Anyway, the maintainer doesn't seem to be very active
kristenc: I noticed
markusry: I haven't seen any comments from him for almost 2 months
markusry: PRs are still being submitted but nothing is being merged or reviewed
markusry: There are currently 48 PRs pending
kristenc: I've been wondering if we could discreetly enquire if there is something going on with that project.
markusry: The last commit merged was Nov 15th
markusry: and last comment from the maintainer was Nov 20th.
tcpepper: the maintainer looks to have dropped off the internet
markusry: It's not clear how to ask apart from entering an issue
markusry: There's no irc channel.
mrkz: no mailing list either?
markusry: There is a rackspace irc channel so we could try asking there
tcpepper: he's previously been in #rackspace and #openstack
kristenc: markusry, internally maybe we could see if anyone has any contact?
mrkz: I know someone from the OSIC working on openstack
markusry: Anyway, the reason I ask is that we can't update gophercloud at the moment as head is incompatible with ciao
***tcpepper just asked in #rackspace
kristenc: what do you mean incompatible?
kristenc: meaning we need to update our code to move to their new api?
markusry: They changed the way dates are handled.
markusry: Before they were just treated as strings
tcpepper: https://github.com/gophercloud/gophercloud/pull/190#issuecomment-270438534
markusry: Now they are parsed
markusry: But parsed incorrectly
markusry: So even though ciao generates valid dates
markusry: gophercloud can't parse them
kristenc: ah - they have a bug that will never get fixed...
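A minimal Go sketch of the failure mode markusry describes, assuming a hypothetical hard-coded layout (this is not gophercloud's actual parsing code): a client that parses dates with a strict layout missing the timezone will reject timestamps that are valid RFC 3339, even though the producer did nothing wrong.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical strict layout: no timezone offset, no fractional seconds.
	const strict = "2006-01-02T15:04:05"

	// A perfectly valid RFC 3339 timestamp, as a service like ciao might emit.
	stamp := "2017-01-05T17:00:41Z"

	if _, err := time.Parse(strict, stamp); err != nil {
		fmt.Println("parse failed:", err) // fails: trailing "Z" is not in the layout
	}
	if t, err := time.Parse(time.RFC3339, stamp); err == nil {
		fmt.Println("RFC 3339 parse ok:", t) // the layout a client should accept
	}
}
```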
kristenc: well - I know we were hoping to update to at least get the parts of the identity api that were merged (list I think?), but doesn't seem worth it if it's not buying us much and it has this bug.
tcpepper: we may have to fork and apply patches in our fork and revendor to our fork (ugh)
markusry: Yep, we may have to do this.
kristenc: before we get that desperate, let's see if we can figure out what the deal is with rackspace. I agree we might have to go there.
tcpepper: I'm in a conversation in #rackspace right now
markusry: Okay.  Sounds good.
kristenc: good, thanks.
kristenc: they have a lot of consumers of that project, I'd be surprised if they let it die.
kristenc: 47 PRs outstanding means lots of contributors
rbradford: well, if they didn't get internal funding..
kristenc: true. fingers crossed.
tcpepper: the move to a non-rackspace top level github implies something's up
kristenc: yes.
markusry: It's pretty widely used
markusry: https://godoc.org/github.com/rackspace/gophercloud?importers
tcpepper: if there's something going on the community of users/contributors will reform around the new repo and get it moving here eventually
tcpepper: that's just awkward when a current maintainer appears mia
kristenc: are we done with our meeting?
tcpepper: the particular oddity to me is a PR where he said he'd merge it that week and that's where he seemingly went offline
tcpepper: rbradford: brought up two topics for design discussion ^^
kristenc: would you guys like to discuss for a moment without me? I need to step out for a moment.
tcpepper: yep
***tcpepper votes for the seemingly easier discussion ;)
tcpepper: MiB vs GiB
mrkz: I'd say that MiB makes sense for images and GiB for volumes (as albertom could have mentioned) :)
btwarden: Does the API require integers, or are floats supported?
tcpepper: integers in openstack api
carlosag left the room (quit: Ping timeout: 260 seconds).
***mrkz brb
rbradford: lets go with GiB
rbradford: i think that aligns us with gcloud/aws/openstack
mrkz: back on keyboard
kristenc: I am back.
kristenc: #topic should we report disk usage in MB or GB
kristenc: I have no disagreement with GB, but can you update all our internal stuff so we just start using GB everywhere?
rbradford: yes
tcpepper: rbradford: do I recall correctly that if I make a snapshot and it's unchanged vs its source, rbd reports it as zero size?
rbradford: rounded up GiB
rbradford: tcpepper, that's the bug I started to fix
tcpepper: hmmm
tcpepper: why a bug?
rbradford: tcpepper, it's zero because it's not filled in
tcpepper: oooh
rbradford: tcpepper, not zero < 1GiB
markusry: launcher is currently reporting GB and not GiB in Stats
tcpepper: I see
rbradford: markusry, we should fix that
rbradford: markusry, OpenStack did get that right in the API docs and says that it's in GiB
markusry: Shouldn't be too hard
tcpepper: but...a snapshot is just metadata right so it is ~0 when it starts, and still later if it's not modified?
rbradford: tcpepper, ah, i see what you mean.
tcpepper: if I change a few bytes do I get charged for just the changed blocks in whatever is the block size?
kristenc: #info ciao will move everything to be consistent with all the other cloud services (GiB)
rbradford: tcpepper, let me start querying it and filling it in from ceph and see what happens!
tcpepper: rbradford: sounds good
kristenc: #action rbradford to handle this change
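A minimal sketch of the "rounded up GiB" convention rbradford describes above: report any non-zero byte count as at least 1 GiB, rounding up. `sizeInGiB` is a hypothetical helper, not ciao's actual code.

```go
package main

import "fmt"

const gib = 1 << 30 // bytes per GiB

// sizeInGiB rounds a byte count up to whole GiB.
func sizeInGiB(bytes uint64) uint64 {
	return (bytes + gib - 1) / gib // ceiling division
}

func main() {
	fmt.Println(sizeInGiB(0))         // 0
	fmt.Println(sizeInGiB(1))         // 1 (anything under 1 GiB rounds up)
	fmt.Println(sizeInGiB(5 * gib))   // 5
	fmt.Println(sizeInGiB(5*gib + 1)) // 6
}
```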
kristenc: we have 4 minutes left.
tcpepper: rbradford: what about us internally using a float for the GiB?
kristenc: we can go over today by 15 minutes if you want.
rbradford: tcpepper, nah, i don't think that's worth it.
tcpepper: ok
kristenc: rbradford, is 20 minutes enough time to cover your second question?
rbradford: kristenc, yes
rbradford: (i hope)
kristenc: meeting bot will leave at 10:15 and I can't stop it.
***tcpepper hopes the discussion is broader: of workload definitions vs command line instance add
kristenc: go ahead then.
tcpepper: ie: not storage specific
kristenc: we can keep talking after meeting bot leaves, it just won't be archived.
rbradford: the discussion came about from https://github.com/01org/ciao/issues/974
mrkz: sorry, have to drop now
rbradford: manohar thought that specifying volumes on the command line would override those from the workload
rbradford: tcpepper believed they should be additive
rbradford: we should decide what it should do and where to document that
kristenc: meaning specifically, if a workload specifies a boot volume, and you specify another on the command line (boot volume)?
rbradford: tcpepper, there is no way to specify that it is a boot volume, right?
rbradford: (there is the ambiguous boot_index=n)
tcpepper: yeah there's not a "bootable" boolean.  but boot_index == "none" or a negative int is supposed to make it not bootable.
kristenc: that was what I thought - any value in boot_index implies bootable.
tcpepper: but...how do you truly enforce any of that in qemu, and without reaching into the image's boot loader, boot config, kernel, etc.
tcpepper: the only way I see would be to not attach one or the other
tcpepper: which feels weird to me
tcpepper: but.."I asked for something contradictory" is bound to result in weirdness either way
markusry: You can use the pci addresses
tcpepper: do you know in what order the bootloader and/or kernel enumerate the pci busses?
tcpepper: bios/bootloader/kernel
rbradford: kristenc, i would think boot_index=1 would definitely NOT be the main disk
kristenc: so is this the specific use case we are trying to address? a user wants to use a workload definition and also specify a boot volume on the cmd line? or is it a different use case?
markusry: If we can't use the pci addresses we have a broader problem
tcpepper: one other wrench in the works here...I get the impression from things on the interwebs that boot_index may or may not consistently be treated as a 0 indexed range versus 1 indexed.  1 may be the main disk in some implementations
markusry: as this is how launcher currently orders the volumes
markusry: And the only way I could figure out how to do this.
tcpepper: I think there's a broader problem and it's not vm or ciao specific.  I think this is simply a complexity of how computers boot.
rbradford: markusry, let's not worry about launcher yet! we still have controller side problems :-)
rbradford: controller in this case created a new volume from the image
rbradford: which is not what the user expected
kristenc: rbradford, in this case, the workload specifies an image ID?
markusry: I was replying to tim's question of how you can enforce boot order in qemu
kristenc: but the user puts a boot volume on the command line?
rbradford: kristenc, yeh it was a standard workload
jvillalo_mobl left the room (quit: Quit: Leaving.).
tcpepper: so in a way this amounts to "boot me an instance of the ubuntu workload, from this fedora image"
kristenc: it seems intuitive to me that if the user specifies a boot volume on the command line it would override the workload's image ID
rbradford: or boot me the ubuntu image with the same volume I had before.
rbradford: kristenc, i don't think we can know it's a boot volume
tcpepper: why would one do that versus create a workload for the desired thing?
kristenc: tcpepper, I was just typing that!
kristenc: heh
tcpepper: esp. now that we have the ability to create workloads...
rbradford: "Boot index needs to be specified properly per instance, and there needs to be exactly one block_device with boot_index set to 0. The nova API will inforce this and the boot command will fail if boot indexes are not properly set for all block devices required for an instance. If using --image or --boot_volume - this will be automatically set by the nova client, if using the --block-device syntax - it will need to be specified."
tcpepper: we need to write down how one is expected to typically use workloads and start instances
tcpepper: rbradford: is that current docu?  link?
jvillalo_mobl [[email protected]] entered the room.
rbradford: https://wiki.openstack.org/wiki/BlockDeviceConfig#Boot_index_2
tcpepper: I think that's old
tcpepper: "NOTE: This page was originally written around the time of the Havana release in 2013. The code may have changed somewhat since then and should be considered the final authority on the current state of the functionality."
tcpepper: it didn't quite match with code in my looking
rbradford: kristenc, my concern is how simple the approach is, either the volumes on the command line a.) override all those in the workload or b.) are additive but the workload boots from whatever it would normally
***tcpepper feels b.) is the more intuitive
kristenc: a) seems like the most intuitive to me actually.
rbradford: i don't want c.) we munge it so that if you get the syntax right it replaces your boot, but the second volume comes from the workload and the third from the command line
kristenc: why would I specify a boot volume if I wanted the old one?
tcpepper: and would better match with what I hope becomes our suggested / documented usage or workloads vs instances of workloads
tcpepper: the openstack docu for this api and ordering talks about a desire to attach volumes for floppy and cd and disk and fallback disk
tcpepper: some serious legacy pet vm usage
rbradford: kristenc, we don't know you want to boot from it on the command line
tcpepper: which is bound to be coupled tightly to a bootloader config and bios assumptions
rbradford: kristenc, the boot_index is not helpful for that.
kristenc: rbradford, again, I thought that was the point of boot_index=0
rbradford: kristenc, i originally wanted a.) but tcpepper persuaded me to b.)
tcpepper: do we need to add a boot_index to the workload def'n?
rbradford: kristenc, it's poorly defined, in ciao we could interpret it to mean that
tcpepper: and disallow conflicts of multiple 0's in the ordering?
rbradford: tcpepper, that wouldn't let you override
tcpepper: or is the workload's image implied to be 0 in the ordering?
rbradford: tcpepper, as all our workloads have bootable volumes
kristenc: rbradford, I'm ok with going with b if you all think it's the most intuitive - what do I know?
tcpepper: rbradford: that's coincidence imho.  it's how we've happened to define the current ones.  But...I don't see much reason for a workload w/o a bootable image defined.
rbradford: tcpepper, right, now maybe we're thinking about workloads wrong, maybe they should be instance templates
rbradford: and when you create an instance you "copy" that template
tcpepper: to me workload is an instance template, but a bit more
rbradford: but you can modify it trivially
tcpepper: it is the description of what will run
tcpepper: if there's no bootable image, there's not really anything there to run. and it's not much of a workload.
ciaomtgbot left the room (quit: Remote host closed the connection).
tcpepper: overriding on the command line feels like working around not having editable workloads, which we now have
kristenc: dammit.
rbradford: maybe we need a way to copy workloads to workloads trivially
kristenc: I think it didn't write out the logs because I forgot to do #endmeeting
kristenc: *sigh*
***tcpepper hasn't used it yet and has vacation foggy brain...
tcpepper: didn't kristenc's PR for workloads allow getting them down to a file locally, and uploading from a file to a new one?
kristenc: tcpepper, not yet.
tcpepper: minor extension then to copy and assign new uuid
kristenc: we have currently just list & create.
rbradford: even better
rbradford: instance to workload
kristenc: show is on my todo list.
tcpepper: yes!
rbradford: which would save your volume into a new workload
tcpepper: instance to workload would be high on my desired list
rbradford: so you can use your old disk.
rbradford: maybe that's actually the feature that we need.
tcpepper: if the main goal is to get at "your old disk", as in the snapshot of your mod's to the original workload image...yes that's what we need.
tcpepper: we still need to write up a page on the lifetime of a volume
tcpepper: "dude where's my data"
kristenc: ok - it seems like rbradford and tcpepper are in agreement on the way the usage of the command line instance add with volume should behave?
rbradford: yup
tcpepper: i think so
markusry: So the agreement is b?
kristenc: it would need to be documented in the ciao-cli I think.
kristenc: seems like it.
markusry: WHat happens to boot_index?  Do we keep it?
kristenc: we need to if the point of using this api is to be openstack compatible
rbradford: i know tcpepper spent a lot of time on it, but i'm querying whether we should have block_device_mapping_v2 at all
tcpepper: I question that also
rbradford: it's marked as optional tbh, appears to be full of bees.
kristenc: me too. it was for openstack compatibility.
tcpepper: imho we need an ability to add ancillary user data disks and this api is a very sloppy way of doing it
rbradford: 'tis optional, we don't implement lots of the optional stuff in the same API.
markusry: And what does boot_index=0 mean in scenario b?
kristenc: it is nice to be able to attach a volume to a pre-defined workload at boot time, thats for sure.
markusry: Yes I agree.
kristenc: if we want to support that usage, we'd either need to use this api or create a new one.
tcpepper: markusry: I see us declaring that an image in a workload def'n is implicitly boot_index=0 and us refusing an instance add if that is specified on the cli also
markusry: Okay good.  Makes sense to me.
markusry: So this is not what we currently have right?
tcpepper: do we distinguish image vs volume in the workload storage currently?
kristenc: that makes sense to me too.
markusry: Will we also ensure that boot_indices are contiguous and don't have any gaps?
markusry: Maybe we already do
tcpepper: I don't see a need for contiguous.  just no duplicates.
tcpepper: but ^^ I specifically say "image in a workload def'n is implicitly boot_index=0" to distinguish from "volume".  I'm not sure if that makes sense to others.
tcpepper: I view images as read-only bootable volumes.
kristenc: me either - as if we were a bios boot menu.
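A minimal sketch of the rule tcpepper proposes (gaps allowed, duplicates rejected, negative means not bootable); `validateBootIndices` is hypothetical, not ciao's actual code.

```go
package main

import "fmt"

// validateBootIndices rejects duplicate boot indices but permits gaps.
func validateBootIndices(indices []int) error {
	seen := make(map[int]struct{})
	for _, idx := range indices {
		if idx < 0 {
			continue // negative boot_index means "not bootable"; it never conflicts
		}
		if _, dup := seen[idx]; dup {
			return fmt.Errorf("duplicate boot_index %d", idx)
		}
		seen[idx] = struct{}{}
	}
	return nil
}

func main() {
	fmt.Println(validateBootIndices([]int{0, 2, 5, -1})) // <nil>: gaps are fine
	fmt.Println(validateBootIndices([]int{0, 0}))        // error: duplicate boot_index 0
}
```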
rbradford: tcpepper, would adding boot_index to the workload storage clarify this?
markusry: And there are no plans to put the boot_index in the start payload?
tcpepper: rbradford: it could.  but I think life is simpler if the workload storage has an "image" and that is implicitly 0 in the order (documented)
jvillalo [jvillalo@nat/intel/x-ajexjrpkmhxluzja] entered the room.
tcpepper: that feels like what people want abstractly.  "Boot an instance this workload definition from this image" is a most base use case.
rbradford: i think if we put it in there, and default it to zero that would be better
tcpepper: i don't see why you'd have a workload definition w/o an image
tcpepper: the more config knobs, the more user complexity
rbradford: tcpepper, you can have a workload with an existing volume
rbradford: tcpepper, in fact the image field in workload needs to go away
rbradford: tcpepper, the image should be specified in the storage
markusry: It's not currently used by launcher.
tcpepper: does it make sense to have workload storage := 0 or 1 images, 0..N volumes?
rbradford: tcpepper, that's what it currently is.
tcpepper: I hadn't considered why you'd want 0 images.  But I can see it if you're going to boot from volume (ie: a r/w persistent instance-specific volume)
rbradford: tcpepper,  the pet case :-)
kristenc: yes, for boot from volume
tcpepper: ok so then I still think the 0 or 1 images would have very few meta fields.  but the volumes would need boot_index.
tcpepper: if the image and volume(s) are declared the same in the workload storage def'n, then they all have boot_index
***tcpepper didn't realize we really supported that pet case now actually
tcpepper: do we enforce only one instance of that workload?
kristenc: not sure what you mean by that. you can currently define an image, and also workload storage, but the image is by default the bootable thing, and the storage is defined to be attached in that case.
tcpepper: or do multiple instances each run off a r/w snapshot of the r/w (but really unused?) volume?
rbradford: tcpepper, rbd won't let you
rbradford: tcpepper, so instance creation will fail
tcpepper: down at launcher launch time?
rbradford: yeh
tcpepper: annoyingly late, but ok..sufficient
rbradford: tcpepper, yeh, we could add our own ref counting...
kristenc: in practice it's ok, because it's a pet.
rbradford: any second creation would be accidental i'm sure.
tcpepper: yeah somebody should know what's up with their pet
kristenc: it simplifies code
***rbradford doesn't want to add volume refcounting to ciao
kristenc: I'd rather have the error case take longer than deal with ref counting.
tcpepper: I'm quite happy with that too
rbradford: we already duplicate too much state from rbd.
kristenc: good.
kristenc: yes.
rbradford: so what ARs did we get out of that?
rbradford: add boot_index to workload storage
rbradford: delete the image field from the workload definition
kristenc: 1) someone needs to update the cli usage to be specific about it's expected behavior.
tcpepper: document in the cli that volumes are additive to the workload def'n
tcpepper: take a stab at documenting usage of workloads
kristenc: as far as whether we are adding boot index, I don't know about that.
rbradford: okay, if we don't support it we shouldn't let the user put it on the command line
kristenc: do we allow them to specify more than one volume on the command line?
tcpepper: take a stab at documenting the lifetime of a volume
rbradford: and also error out in controller if they try to use it.
kristenc: boot order can be implied from the order it's put in.
tcpepper: add a 'copy workload' and 'workload from instance' cli
tcpepper: boot order actually can't well be implied from the order its put in
kristenc: why?
tcpepper: multiple instances of an argument iirc (vacation haze) come into a go map
tcpepper: lemme double check
tcpepper: maybe it's a slice
tcpepper: w/ consistent order
tcpepper: I feel like I saw my test invocations with multiple -volume arguments being ordered differently at times in memory than I typed on the cli
kristenc: ok - yes obviously if order isn't preserved that won't work.
kristenc: that needs confirming.
tcpepper: hmm..it is a slice
tcpepper: assuming flag.Parse is nice...the slice should match the cli order
tcpepper: needs further confirmation
kristenc: ok - pending inspection :)
kristenc: if it works, we don't need boot index on the command line.
kristenc: the api will still use it.
rbradford: we also need the invariant that any workload storage with an image type is boot_index zero.
kristenc: but will always treat workload image/volumes that are defined as boot index 0
kristenc: yes - if image id is set, it is always boot index 0
kristenc: if image id is "" and a volume is predefined, then it is always index 0
kristenc: anything on the command line is just appended.
***tcpepper agrees with ^^
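A minimal sketch of the ordering rule kristenc spells out above: a workload's image (or, failing that, its first predefined volume) is implicitly boot index 0, and command-line volumes are additive, appended in the order given. The types and the `bootOrder` helper are hypothetical, not ciao's actual code.

```go
package main

import "fmt"

type storage struct {
	ID       string
	Bootable bool
}

func bootOrder(workloadImageID string, workloadVolumes, cliVolumes []string) []storage {
	var order []storage
	if workloadImageID != "" {
		// An image in the workload definition is implicitly boot_index 0.
		order = append(order, storage{ID: workloadImageID, Bootable: true})
	} else if len(workloadVolumes) > 0 {
		// Otherwise the first predefined volume takes index 0.
		order = append(order, storage{ID: workloadVolumes[0], Bootable: true})
		workloadVolumes = workloadVolumes[1:]
	}
	for _, v := range workloadVolumes {
		order = append(order, storage{ID: v})
	}
	// Volumes from the command line are additive, appended in CLI order.
	for _, v := range cliVolumes {
		order = append(order, storage{ID: v})
	}
	return order
}

func main() {
	for i, s := range bootOrder("img-1", []string{"vol-a"}, []string{"vol-b"}) {
		fmt.Printf("boot_index %d: %+v\n", i, s)
	}
}
```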
markusry: I think we should be able to infer ordering from the command line options
tcpepper: I'm getting consistent order from a simplistic test right now
tcpepper: i'm willing to trust that it's a slice and go's going to append to the slice
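A minimal sketch of the behaviour tcpepper is confirming: with Go's `flag.Value` interface, each repeated -volume argument appends to a slice, so the slice order matches the command line. This is an illustrative test, not ciao-cli's actual flag code.

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

type volumeList []string

func (v *volumeList) String() string { return strings.Join(*v, ",") }

// Set is called once per occurrence of the flag, in command-line order.
func (v *volumeList) Set(s string) error {
	*v = append(*v, s) // append preserves the order the user typed
	return nil
}

func main() {
	var volumes volumeList
	flag.Var(&volumes, "volume", "volume to attach (repeatable)")
	flag.Parse()

	// e.g. go run main.go -volume uuid-a -volume uuid-b
	// prints: uuid-a,uuid-b
	fmt.Println(volumes.String())
}
```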
kristenc: i also have vacation fog - there's a way to specify that the volume should be attached or used as a boot option?
tcpepper: kristenc: in the workload def'n?  or on the cli?
kristenc: on the cli for this extra volume we are adding.
kristenc: what if we just want it to be attached?
kristenc: and it's not bootable
tcpepper: I believe they're all just attached, in order.  and the the bios/bootloader/kernel get to decide what truly happens
kristenc: so if a volume is bootable, then it might be booted from.
tcpepper: yeah
kristenc: if all the other bootables fail.
tcpepper: same as a normal PC
kristenc: otherwise it'll just be attached.
kristenc: ok.
tcpepper: put a bunch of disks in and see what happens
markusry: One other issue is how the user can know the names of the corresponding devices in the VM
tcpepper: I played around with qemu and didn't get good results with efi images, but...in theory we could map boot_index 0/1/2/3 to -hda -hdb -hdc -hdd
tcpepper: I specifically skipped the optional parameter for device name
tcpepper: as I couldn't make it work
kristenc: isn't there in the api a way to specify the drive name?
kristenc: ah - ok.
markusry: Does having an explicit boot_index make it easier for the user to work out what the name should be?
markusry: kristenc: Yes.  That's what I was wondering
markusry: right now we don't use anything like this
tcpepper: maybe...boot_index really feels like it amounts to a "desired pci enumeration order"
kristenc: tcpepper, you couldn't make it work from qemu?
rbradford: i think we should put boot index in workload storage so that if you have a workload with multiple volumes you'll know what will happen
tcpepper: but that doesn't match actual boot, or device naming
markusry: I haven't investigated yet
markusry: Tell you what.
tcpepper: kristenc: correct.  I tried to get specific named devices for my attached volumes in and the qemu guest saw them differently than I was asking
markusry: Give me an AR and I'll see if I can figure out how to give the volumes proper names
tcpepper: I may have just done it wrong.  but this was simple qemu command line vm starting w/o ciao.
kristenc: ok, good idea.
kristenc: markusry can take a look and see if he can name volumes.
tcpepper: I don't think we should support that
tcpepper: if you assume your disk is called /dev/sda you will break
tcpepper: this is a fact unrelated to virtualization or cloud
markusry: tcpepper: How should a user know what device to mount inside the VM?
tcpepper: inside the running OS you need to add userspace level handling of persistent device aliases, assuming your devices actually have unique serial numbers that are discoverable.
tcpepper: markusry: disk label or partition label or serial number.
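A minimal sketch of the userspace approach tcpepper describes: rather than trusting a /dev/sdX or /dev/vdX name, resolve the stable aliases udev maintains under /dev/disk/by-id (or by-uuid, by-label) to whatever node the kernel happened to assign. Illustrative only, not part of ciao.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const byID = "/dev/disk/by-id"
	entries, err := os.ReadDir(byID)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		target, err := filepath.EvalSymlinks(filepath.Join(byID, e.Name()))
		if err != nil {
			continue
		}
		// Stable alias -> whatever /dev/vdX the kernel enumerated this boot.
		fmt.Printf("%s -> %s\n", e.Name(), target)
	}
}
```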
markusry: And maybe we can set this information somehow in qemu
tcpepper: maybe for serial number...
tcpepper: but the labels would be dangerous to touch imho
markusry: I'll look into this anyway
tcpepper: we shouldn't write the user's data disk contents
markusry: And maybe take a look to see what openstack and the other cloud providers do
kristenc: it'd be nice to know our choices.
tcpepper: i stopped looking at openstack when I saw stuff saying it depended on libvirt and was unreliable at runtime
kristenc: agree we shouldn't touch the user's disk.
tcpepper: markusry: the simple test I tried was:  run qemu by hand with a single disk and attempt to get the disk to show up as /dev/vdb instead of /dev/vda
tcpepper: or that was the simplest test I devolved to after trying more complex things
tcpepper: and failed at it all
rbradford: on aws, when you associate a volume with a VM it shows you what it will appear as before it starts
rbradford: e.g. /dev/sda
tcpepper: if you can accomplish ^^ you've got the control we'd need
markusry: Maybe as rob says we don't need control
markusry: we just need to know what qemu has done
rbradford: aws does let you change the order
tcpepper: interesting.  I wonder how they do it.
markusry: We can control the order using pci addresses
rbradford: via the qemu command line
rbradford: :-)
tcpepper: they don't use qemu do they?
markusry: which is currently what launcher does
tcpepper: I thought they were xen?
markusry: The problem is that we need to use qmp as well and it doesn't have the same level of support as the command line does
tcpepper: and it's been ages since I looked but I thought you had to run their kernel?
rbradford: tcpepper, the acpi ordering gives the linux names
rbradford: and as markusry points out you can change that with qemu's pci addresses
kristenc: rbradford, tcpepper did I mention that you are the new gatekeepers? you are.
rbradford: or maybe it's not acpi with virtio
tcpepper: last I knew the kernel made no guarantees on consistent /dev name ordering relative to the hardware enumeration
***tcpepper reads about AWS hvm vs pv
tcpepper: well.  I don't believe you can truly control this on a PC.  and hvm should be equivalent.
tcpepper: I can accept that data will point at there being seeming consistent ordering.  but I think it's just lack of complete data.
tcpepper: we'll see
markusry: At the moment we rely on the PCI addresses for boot from volume to work
tcpepper: and maybe it's okay to behave as expected most of the time and react in any other case
kristenc: so at this moment, do we really only have two ARs from this discussion? to update the cli documentation? and mark to take a look at device naming?
markusry: We have to provide a PCI address for the rbd image otherwise the qemu instance doesn't boot
tcpepper: markusry: I read that as "we rely on luck", b/c it depends on how the bios/bootloader/kernel enumerate the busses
markusry: and the PCI address controls the ordering
markusry: The problem is with the APIs
markusry: We have no choice but to provide the PCI address and this affects the order
markusry: I have an open issue about this to see if I can figure out a better way
markusry: https://github.com/01org/ciao/issues/561
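A minimal sketch of the technique markusry describes: launcher pins each volume to an explicit PCI slot so qemu's enumeration order (and hence the guest's device order) is deterministic. The argument construction below is illustrative, not launcher's actual code.

```go
package main

import "fmt"

func main() {
	volumes := []string{"rbd:ciao/boot-vol", "rbd:ciao/data-vol"}
	var args []string
	for i, v := range volumes {
		args = append(args,
			"-drive", fmt.Sprintf("file=%s,if=none,id=drive%d", v, i),
			// Fixed PCI slots (0x04, 0x05, ...) keep enumeration order stable
			// across boots instead of leaving slot assignment to qemu.
			"-device", fmt.Sprintf("virtio-blk-pci,drive=drive%d,addr=0x%x", i, i+4),
		)
	}
	fmt.Println(args)
}
```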
rbradford: if kernel bump changed the ordering, suddenly a lot of people would be unhappy
markusry: I'll look at this next as this is key to understanding the naming of devices
tcpepper: rbradford: that's happened in the past
tcpepper: and linux's answer was manage your device persistent naming in userspace
rbradford: we just checked markusry's machine
tcpepper: I think if you grep "/dev" in /etc/fstab on a modern linux you will not see it
rbradford: his / is /dev/sda on ubuntu
tcpepper: what's in the fstab?  and boot loader config?  I bet it's not /dev/sda
rbradford: but i think fedora does use the uuid based stuff.
markusry: I need to go but I'll take a look at 561 and bump up the priority.
markusry: It's been sitting there in the issues database for 4 months
markusry: Its time has come.
kristenc: I created https://github.com/01org/ciao/issues/987 and added it to my workloads project.
tcpepper: here's what I caught for AR's:
kristenc: fedora does - when I created a custom image for the power lab I was designing, I had to move to uuid as well.
tcpepper: add boot_index to workload storage
tcpepper: delete the image field from the workload definition
tcpepper: document in the cli that volumes are additive to the workload def'n
tcpepper: take a stab at documenting usage of workloads
tcpepper: take a stab at documenting the lifetime of a volume
tcpepper: add a 'copy workload' and 'workload from instance' cli
tcpepper: <eol>
kristenc: because the order kept changing on me depending on the bios.
tcpepper: my ubuntu vm here has fstab mounting / by uuid
tcpepper: happens currently to be /dev/vda1 though
markusry left the room (quit: Quit: This computer has gone to sleep).
kristenc: tcpepper, I don't think we need to delete the image field.
tcpepper: I spent quite a few years in the 00's doing enterprise storage and I'm pretty sure it's a well established practice to not count on a /dev/${FOO} name unless you've set up udev rules to ensure you get the persistence you expect
kristenc: I don't think we agreed to add boot_index to workload storage
rbradford: kristenc, why do we want it?
kristenc: we said order would be sufficient.
rbradford: kristenc, it's unused
kristenc: rbradford, no it isn't.
rbradford: kristenc, what's it used for?
kristenc: to specify an image id.
rbradford: docker.
kristenc: from the image store.
rbradford: kristenc, no, they have workload storage of image type associated with them
rbradford: kristenc, the image field isn't used on VMs any longer
kristenc: maybe not in an launcher, but controller uses it.
kristenc: when it exists.
rbradford: see 9984d9bd1a6474951f104366c3d2ff77c39a5ea3
rbradford: ImageID is not used for VMs, only for docker
rbradford: so we can't delete it
kristenc: docker uses image name
kristenc: does not use ImageID
rbradford: then it's not used at all then
tcpepper: {fyi somebody at rackspace is chasing answers for me in #rackspace}
kristenc: ok - we can remove ImageID but retain ImageName.
kristenc: we need names on our ARs. AR #1 update cli documentation - tcpepper
rbradford: so i think we need boot_index on workload storage for when you have multiple volumes in your workload
rbradford: we can't preserve the ordering from the database, etc.
kristenc: AR #2 - take a look at device naming - mark
***tcpepper agrees w/ rbradford ^^
rbradford: so we need something to say what order they should be in (unless we have something else...like the UUID)
tcpepper: ordering on the cli only works if there's 0 or 1 in the workload def'n
rbradford: but we still need to know what the root is
kristenc: ok
rbradford: this has been a great discussion, hopefully not too frustrating?
kristenc: we're going to have to file issues for these things.
tcpepper: i could see even having the workload def'n include a #2 and #3 specifically so a cli triggered recovery could insert a 0 or 1 ahead of the normally preferred one
kristenc: some of them will be a lot of work.
tcpepper: this has been exactly what I've hoped for for Thursday technical discussion!
kristenc: I will file issues for deleting the image id and adding the boot index - we will need to assign owners based on prioritizing with other work I guess.
tcpepper: that's the one thing I thought about on vacation
tcpepper: this is all stuff that's very important for meaningful usability
tcpepper: but is it the near-term priority
kristenc: I added https://github.com/01org/ciao/issues/988
kristenc: and https://github.com/01org/ciao/issues/989
kristenc: tcpepper, this meeting went 1.5 hours longer than scheduled. I wonder if 1 hour is long enough for a meeting that is focussed on technical discussion?
kristenc: meaning - should we plan starting our thurs meeting at 8am and going till 10 am from now on?
tcpepper: it'll all depend on the discussion
tcpepper: we could be mean and stop conversation
tcpepper: there's always the next week
kristenc: it's mainly about allowing people to allocate time on their schedule to participate.
tcpepper: but I feel like a productive conversation/meeting should be allowed to flow
tcpepper: yeah..the predictability of it is important
kristenc: plus we don't want to keep the Europeans from leaving for home.
rbradford: gotta fly
kristenc: well - let's keep it the same for now - see if we consistently need 2 hours or more.
kristenc: then change to start earlier if we do.
tcpepper: sounds like a good plan
tcpepper: I'm willing to shift earlier for start too if that helps the late euro finish time
tcpepper: I'm sure rbradford, markus, et al will let us know if it's bothersome
kristenc: me too - i think we only started it at 9am because Amy had a conflict at 8am previously before she developed a permanent conflict.
rbradford left the room (quit: Remote host closed the connection).
jvillalo_mobl left the room (quit: Quit: Leaving.).
tcpepper: jrperritt just followed me on twitter
kristenc: tcpepper, well, maybe you should tweet about whether gophercloud is dead and make some comment about its maintainer :)
tcpepper: I think I have a productive poke in through #rackspace...I'll ride that a bit ;)
jvillalo_mobl [[email protected]] entered the room.
