WeeklyTelcon_20160628
Jeff Squyres edited this page Nov 18, 2016
- Dialup Info: (Do not post to public mailing list or public wiki)
- (from memory this week, may not be accurate)
- Geoff Paulsen
- Jeff Squyres
- Howard
- Nathan Hjelm
- Josh Hursey
- Joshua Ladd
- Artim
- Ralph
- Sylvain Jeaugey
- Milestones: https://github.com/open-mpi/ompi-release/milestones/v1.10.3
- 1.10.3 was released June 15th.
- Would like to start transitioning folks to 2.0.0 as soon as it is released.
- Blocker Issues: https://github.com/open-mpi/ompi/issues?utf8=%E2%9C%93&q=is%3Aopen+milestone%3Av2.0.0+label%3Ablocker
- Milestones: https://github.com/open-mpi/ompi-release/milestones/v2.0.0
- Master PR1817 - does this need to go to 2.0.0 or 2.0.1?
- Master PR that Pasha @ Mellanox says needs to go to v2.x.
- Need more info about a reproducer for the bug this fixes. Maybe the UCX / add_procs non-default issue?
- v2.x PR1246 - cherry-pick of the MPI_Request multi-thread race condition fix. Hope to get it in tonight.
- Last week, still waiting on:
- PR1237 on ompi-release - PR1794 and PR1795
- and the Jenkins failure on the Mellanox cluster.
- thread_test 1.1 - running the overlap test. Not great, but good at finding bugs.
- PR1821 - additional performance improvements for MPI_Waitsome(). Not a regression.
- George and Nathan need to look at it.
- If we get PR1821 and PR1246 in today and Jenkins runs okay:
- Roll RC4 tomorrow, and diagnose the Mellanox issue.
- If we could release next Tuesday, that would be fantastic!
- Andrew Lumsdaine, our primary faculty sponsor for Open MPI at Indiana University, is leaving IU.
- We have a bunch of Open MPI infrastructure hosted at IU (for free): the web site, MTT web reporter + database, etc. This infrastructure is now likely to go away within the next several months.
- There are many implications of such a move: cost, features, resources, timing, etc. We need to start talking about this as a community to decide: a) where to move OMPI's electronic infrastructure, and b) how to move it (there are complicated technical issues involved).
- Ralph, Howard, and I have been looking at alternatives over the past few days. We would like to present a few ideas / proposals to everyone.
- We have an approximately 3-month deadline to find a new home.
- Web services at IU (website, mailing list archives, nightly tarballs).
- MTT web interface + MySQL database are also needed.
- Building of nightly tarballs and release tarballs.
- GitHub automation - bots and other minor services. Some is web-based, some is script / cron based.
- Jenkins would go away.
- Jeff and Howard have a Google doc; it will transition to the wiki.
- IU is also the holder of the legal docs. All data is now electronic, so it should be okay to archive in a community space.
- Need to consider how to pay for services we were getting from IU for free.
- Member dues, face-to-face meeting registration costs, or something similar.
- Summer Meeting: (https://github.com/open-mpi/ompi/wiki/Summer-2016%2CTBD)
- Decided to meet August 16-19 at the IBM site in Dallas.
Review Master MTT testing (https://mtt.open-mpi.org/)
- Cisco, ORNL, UTK, NVIDIA
- Mellanox, Sandia, Intel
- LANL, Houston, IBM