{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":4339773,"defaultBranch":"develop","name":"htslib","ownerLogin":"samtools","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2012-05-15T19:34:48.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1518450?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1713194412.0","currentOid":""},"activityList":{"items":[{"before":"6b0a9e97da8a0772e012379b01296685b54eaa5d","after":"30c9c50a874059e3dae7ff8c0ad9e8a9258031c8","ref":"refs/heads/develop","pushedAt":"2024-05-23T09:10:14.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Warn if bgzf_getline() returned apparently UTF-16-encoded text\n\nText files badly transferred from Windows may occasionally be\nUTF-16-encoded, and this may not be easily noticed by the user.\nHTSlib should not accept such encoding (as other tools surely don't,\nhence doing so would cause interoperability problems), but it should\nideally emit a warning or error message identifying the problem.\n\nReading text from a htsFile/samFile/vcfFile will already have failed\nwith EFTYPE/ENOEXEC if the text file is UTF-16-encoded, as the encoding\nwill not have been recognised by hts_detect_format().\n\nOTOH bgzf_getline() will return a UTF-16-encoded text line. Add a\nsuitable context-dependent diagnostic to the BGZF-based bgzf_getline()\ncalls in HTSlib: in hts_readlist()/hts_readlines(), emit a warning\n(once, on the first line); in tbx.c, emit a more specific error message\nif get_intv() parsing failure is due to UTF-16 encoding.\n\n[TODO] If utf16_text_format were added to htsFormatCategory,\nthe new is_utf16_text() function is suitable for detecting it.","shortMessageHtmlLink":"Warn if bgzf_getline() returned apparently UTF-16-encoded text"}},{"before":"b204d55c88008ee2b1ef1267e30efa99842e0277","after":"6b0a9e97da8a0772e012379b01296685b54eaa5d","ref":"refs/heads/develop","pushedAt":"2024-05-20T13:17:02.000Z","pushType":"pr_merge","commitsCount":4,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Update htscodecs submodule to commit 5a2627e\n\n * Avoid typedef disabled by setting _XOPEN_SOURCE","shortMessageHtmlLink":"Update htscodecs submodule to commit 5a2627e"}},{"before":"9ad8270bd30d7b2d0bb20fa0b533a7abedd1cac7","after":"b204d55c88008ee2b1ef1267e30efa99842e0277","ref":"refs/heads/develop","pushedAt":"2024-05-09T16:11:02.000Z","pushType":"pr_merge","commitsCount":3,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Make sure `-h -1` does not loose the first line","shortMessageHtmlLink":"Make sure -h -1 does not loose the first line"}},{"before":"292a35d7c5181c02521aa3bb7dbd5891f1696967","after":"9ad8270bd30d7b2d0bb20fa0b533a7abedd1cac7","ref":"refs/heads/develop","pushedAt":"2024-05-09T15:32:48.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Use only _regions_add() when adding the list of contig names\n\nDon't use _regions_init_string(), which misinterprets contig names\ncontaining colons as region specification strings. The code used\n_regions_init_string() rather than _regions_add() only when needed\nto allocate a new bcf_sr_regions_t structure; instead extract basic\ninitialisation into a new bcf_sr_regions_alloc() function, which as\na bonus checks the memory allocation. Use the new function throughout.\n\nFixes samtools/bcftools#2179.","shortMessageHtmlLink":"Use only _regions_add() when adding the list of contig names"}},{"before":"7576aca19938147dda7688ab685be4d7e5a0cd35","after":"292a35d7c5181c02521aa3bb7dbd5891f1696967","ref":"refs/heads/develop","pushedAt":"2024-05-09T15:13:26.000Z","pushType":"pr_merge","commitsCount":3,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Improve compiler compatibility of nibble2base_ssse3","shortMessageHtmlLink":"Improve compiler compatibility of nibble2base_ssse3"}},{"before":"9a99a1d574a0438d7f4e8a81e60b315f653f4b68","after":"7576aca19938147dda7688ab685be4d7e5a0cd35","ref":"refs/heads/develop","pushedAt":"2024-05-03T15:50:48.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Provide extra CRAM container manipulations and index queries.\n\nAdded to support extra functionality to `samtools cat`.\n\n- Some internal cram functions are no longer static as they're called\n from cram_external.c, but they don't have HTSLIB_EXPORT and aren't\n an official part of the API.\n These are cram_to_bam, cram_next_slice\n\n- New public CRAM APIs:\n These facilitate manipulation at the container level, both seeking\n to specific byte offsets, but also being able to specify containers\n as the n^th container listed in the index.\n\n cram_container_get_coords returns refid, start and span fields from\n the opaque cram_container struct.\n\n cram_filter_container copies a container but applies region based\n filtering, as already specified in the cram_fd with a range request.\n (Note we currently also provide cram_copy_slice, but may want to add\n a cram_copy_container for consistency.)\n\n cram_index_extents queries an index to return byte offsets of the\n first and last container overlapping a specified region.\n\n cram_num_containers_between queries an index to report the number of\n indexed containers and their container numbers (starting at 0 for\n the first) covering a range.\n\n cram_num_containers is a simplified cram_num_containers_between\n doing only the counting operation and on the entire file.\n\n cram_container_num2offset returns the byte offset for the n^th\n container. cram_container_offset2num does the reverse.\n\n- A new cram_skip_container function, which is currently internal only\n but may potentially have use externally in the future. It's used by\n cram_filter_container when it detects it'll filter out everything.\n\n- cram_index_query now copes with HTS_IDX_NOCOOR (-2) and maps it\n over to refid -1.","shortMessageHtmlLink":"Provide extra CRAM container manipulations and index queries."}},{"before":"1e7efc0b9fb2472453dc22ccf30f57a6818d8585","after":"9a99a1d574a0438d7f4e8a81e60b315f653f4b68","ref":"refs/heads/develop","pushedAt":"2024-05-02T13:37:05.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Check interval start to avoid overflowing bin numbers\n\nCheck start positions of query intervals against the maximum position\nrepresentable in the index's geometry, to avoid negative bin numbers\nand the resulting infinite loops in the do...while loop.\n\nIntroduce hts_bin_maxpos() and hts_idx_maxpos(), and use them wherever the\nmaxpos calculation appears. (Leave the latter private, at least for now.)\n\nAlso change the existing end checks to <= as end is exclusive -- note it\nis used as end-1 in the code guarded by the checks.","shortMessageHtmlLink":"Check interval start to avoid overflowing bin numbers"}},{"before":"c93f5a57e63bc594a291b145407f1d8fcbef59bd","after":"1e7efc0b9fb2472453dc22ccf30f57a6818d8585","ref":"refs/heads/develop","pushedAt":"2024-05-02T08:28:57.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"fix fuzz integer overflow in cram encoder.\n\nInput files with very long CIGAR strings and consensus generated\nembedded reference can lead to exceptionally long CRAM blocks which\noverflow the check for large size fluctuations (to trigger new\ncompression metric assessments).\n\nReformulated the expression to avoid scaling up values.\n\nCredit to OSS-Fuzz\nFixes oss-fuzz 68225","shortMessageHtmlLink":"fix fuzz integer overflow in cram encoder."}},{"before":"deeb9f01376ca9416315e4c9f5fe489e6f03e05f","after":"c93f5a57e63bc594a291b145407f1d8fcbef59bd","ref":"refs/heads/develop","pushedAt":"2024-04-30T11:11:03.000Z","pushType":"push","commitsCount":3,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Update htscodecs version to fix compiler void pedantry.","shortMessageHtmlLink":"Update htscodecs version to fix compiler void pedantry."}},{"before":"6a7d33abc6cae840023868ccdd946d0d8759f259","after":"0cadce238af0c6398751999bad703d4b19615860","ref":"refs/heads/master","pushedAt":"2024-04-15T15:20:12.000Z","pushType":"push","commitsCount":34,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Release 1.20","shortMessageHtmlLink":"Release 1.20"}},{"before":"a67b53c2a2d2fe54be93362d3f8f250378b9dda3","after":"deeb9f01376ca9416315e4c9f5fe489e6f03e05f","ref":"refs/heads/develop","pushedAt":"2024-04-15T15:20:12.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Merge version number bump and NEWS file from master","shortMessageHtmlLink":"Merge version number bump and NEWS file from master"}},{"before":"1cdc7984f3f14c2c797c5af654a5e0b5667c4ec6","after":"a67b53c2a2d2fe54be93362d3f8f250378b9dda3","ref":"refs/heads/develop","pushedAt":"2024-04-12T09:15:00.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Further recommend use of libdeflate and list OS packages","shortMessageHtmlLink":"Further recommend use of libdeflate and list OS packages"}},{"before":"0cc34b3dcc869d3d6474460a51175cc371204e69","after":"1cdc7984f3f14c2c797c5af654a5e0b5667c4ec6","ref":"refs/heads/develop","pushedAt":"2024-04-11T13:45:59.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Add NEWS ready for 1.20 release","shortMessageHtmlLink":"Add NEWS ready for 1.20 release"}},{"before":"c1247f9e7eb2a32291cb375e90d303a0ee9dcf73","after":"0cc34b3dcc869d3d6474460a51175cc371204e69","ref":"refs/heads/develop","pushedAt":"2024-04-05T15:38:01.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Spring 2024 copyright update.","shortMessageHtmlLink":"Spring 2024 copyright update."}},{"before":"3cfe87690d047c06ec0d29a859c930b635a42e96","after":"c1247f9e7eb2a32291cb375e90d303a0ee9dcf73","ref":"refs/heads/develop","pushedAt":"2024-03-27T11:13:10.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Ensure S3 redirects use TLS\n\nWhen following a 3xx redirection from AWS,\nredirect_endpoint_callback() wasn't putting 'https://' on the\nnew URL. The redirection worked, but dropped back to http.\nFix by prepending 'https://' to ensure it uses TLS, and\nadding a bit of error checking to ensure all parts of the\nnew url have been included.","shortMessageHtmlLink":"Ensure S3 redirects use TLS"}},{"before":"78e507dbd8a0567c7f3c8c1e265d36218e3f0e77","after":"3cfe87690d047c06ec0d29a859c930b635a42e96","ref":"refs/heads/develop","pushedAt":"2024-03-26T15:05:55.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Drop duplicate tbx_conf_t","shortMessageHtmlLink":"Drop duplicate tbx_conf_t"}},{"before":"6ea61bfe531edd387ee01ca91b049845ac0d841d","after":"78e507dbd8a0567c7f3c8c1e265d36218e3f0e77","ref":"refs/heads/develop","pushedAt":"2024-03-21T12:14:40.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Ensure hFILE_scheme_handler open and isremote are set\n\nCheck that hfile plugins have supplied open() and\nisremote() functions in their hFILE_scheme_handler struct,\nand refuse to add them if not. Failing to check this\ncould lead to an attempt to call a NULL pointer when\nthe interfaces are used.\n\nFix up the \"crypt4gh-needed\" scheme handler, which did not\nsupply isremote(); and \"mem\" which failed to supply open().\n\nThanks to John Marshall for suggested validation code\nin hfile_add_scheme_handler().\n\nCredit to OSS-Fuzz\nFixes oss-fuzz 67349","shortMessageHtmlLink":"Ensure hFILE_scheme_handler open and isremote are set"}},{"before":"ca0f6214b94adf9278cbcaaefd50f5fe9455f9ad","after":"6ea61bfe531edd387ee01ca91b049845ac0d841d","ref":"refs/heads/develop","pushedAt":"2024-03-21T11:15:38.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pd3","name":"Petr Danecek","path":"/pd3","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/524074?s=80&v=4"},"commit":{"message":"Fix duplicated but missing FORMAT bug\n\n7ce510c added code to drop duplicate FORMAT tags, but missed the\ncase where the duplicated entry was off the end of the per-sample\ndata, so it hit the part that put in MISSING values. This lead\nto an attempt to call memset() with a negative size. Fixed by\nadding code to skip the duplicated FORMAT tag.\n\nCredit to OSS-Fuzz\nFixes oss-fuzz 67431","shortMessageHtmlLink":"Fix duplicated but missing FORMAT bug"}},{"before":"55cafdc9434f3141019cda7274c7a930a4ddd361","after":"ca0f6214b94adf9278cbcaaefd50f5fe9455f9ad","ref":"refs/heads/develop","pushedAt":"2024-03-15T09:23:10.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"[minor] Change error to warning.\n\nIt looks like it was meant to be a warning rather than an error message.","shortMessageHtmlLink":"[minor] Change error to warning."}},{"before":"7d3efee742cd13a5b23c057ee29a71a51c6f94a6","after":"55cafdc9434f3141019cda7274c7a930a4ddd361","ref":"refs/heads/develop","pushedAt":"2024-03-14T18:08:03.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Adjust the expected output for malformed VCF","shortMessageHtmlLink":"Adjust the expected output for malformed VCF"}},{"before":"3e54663232dd3ce80eae44de2093a2f34ff901de","after":"7d3efee742cd13a5b23c057ee29a71a51c6f94a6","ref":"refs/heads/develop","pushedAt":"2024-03-07T16:48:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Fix S3 virtual path and add redirect capability (PR #1756)\n\nCurrently only s3 pathing works with a redirect;\r\nWith this fix s3 virtual gets a virtual redirect url;\r\nadditionally, performs an extra redirect if necessary.","shortMessageHtmlLink":"Fix S3 virtual path and add redirect capability (PR #1756)"}},{"before":"255dfcbfa2cfbb8fcb4735b7c3bee5744c30b3f7","after":"3e54663232dd3ce80eae44de2093a2f34ff901de","ref":"refs/heads/develop","pushedAt":"2024-03-07T16:12:54.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Delay closing the index file when indexing on-the-fly\n\nThis is to ensure the timestamp on the index file is later than the\none on the file being indexed, preventing spurious \"The index file\nis older than the data file\" messages when it's used. The delay\nis necessary because the main file EOF block may not have been\nwritten when hts_idx_save_as() has been called.\n\nReworks the idx_save functions to add one that keeps the index\nhandle open, storing it in the hts_idx_t struct. hts_close()\nchecks for this, and closes the index file if it finds one\nafter having closed the file is was passed. Unfortunately this\nmeans hts_close() will report any errors that happen when the\nindex file is closed. To reduce the chance of that happening,\nthe index writer calls bgzf_flush() to reduce the amount of\nwork that the final bgzf_close() on the index has to do.\n\nAn unfortunate wrinkle is that to set the timestamp on the\nindex file, we need to ensure some data is written just before\nthe file is closed. This is find for CSI indexes as they're\nBGZF compressed and we write an EOF block. For uncompressed\nBAI indexes, we instead use an ugly hack of keeping the last\nfew bytes back until we want to close the file. This is\nhorrible, but I can't think of a better way to get the result\nwe want.\n\nFinally, it turned out that calling bgzf_flush() when the\nfile has been opened in uncompressed mode (\"u\") crashed\ndue to a NULL pointer dereference. It now more usefully\nflushes the underlying file.","shortMessageHtmlLink":"Delay closing the index file when indexing on-the-fly"}},{"before":"5d2c3f721d78906486f1759b8cda87649a14c684","after":"255dfcbfa2cfbb8fcb4735b7c3bee5744c30b3f7","ref":"refs/heads/develop","pushedAt":"2024-03-07T14:52:00.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"added thread pool to tabix operations","shortMessageHtmlLink":"added thread pool to tabix operations"}},{"before":"6d0dd0025811744668ad82ec5f8bec6bf151f16e","after":"5d2c3f721d78906486f1759b8cda87649a14c684","ref":"refs/heads/develop","pushedAt":"2024-02-28T19:56:41.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"yfarjoun","name":"Yossi Farjoun","path":"/yfarjoun","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2745554?s=80&v=4"},"commit":{"message":"Update bgzip.1","shortMessageHtmlLink":"Update bgzip.1"}},{"before":"3e11d0e335bfdf34db3ba5f61d52d1d19a60bdfe","after":"6d0dd0025811744668ad82ec5f8bec6bf151f16e","ref":"refs/heads/develop","pushedAt":"2024-02-27T16:45:11.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Extend pileup test to include quality values.\n\nWithout this we cannot test that the overlap removal code is working,\nwhich operates by zeroing quality values.","shortMessageHtmlLink":"Extend pileup test to include quality values."}},{"before":"7db7e8371d5230bf222d55852880001979ae8e93","after":"3e11d0e335bfdf34db3ba5f61d52d1d19a60bdfe","ref":"refs/heads/develop","pushedAt":"2024-02-23T12:00:55.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Bug fix the recent bam_plp_destroy memory leak removal\n\nThe pileup interface maintains a linked list of lbnode_t pointers.\nThese start at iter->head, chain through lbnode->next, and then end up\nat iter->tail. We also have a separate linked list in iter->mp of\nitems we've previously freed, so we don't have to free and malloc\ncontinually.\n\nbam_plp64_next adds and removes items to these linked lists, and it\ncalls iter->plp_destruct when it puts items into the free list. So\nfar so good, and so correct.\n\nHowever if for whatever reason we bail out of the pileup interface\nearly, before we've punted all records onto the iter->mp free list,\nthen we weren't calling plp_destruct on the current \"in flight\" data.\nThis caused a memory leak, fixed in d028e0d.\n\nUnfortunately there is a subtlety I didn't notice at the time. The\nin-flight linked list goes from iter->head to *one before*\niter->tail. The tail is simply a dummy node and unused by the code.\nI don't understand why it has to work this way, but presumably someone\ndidn't want iter->head and iter->tail to ever point to the same item.\n\nThe bam_plp_destroy function however has to move all these items to\nthe iter->mp free list, so here it goes from iter->head to iter->tail\ninclusively. This commit avoids attempting to call the destructor on\nthe tail, which could be a previously freed item that was pulled back\noff the iter->mp list, leading to double frees.","shortMessageHtmlLink":"Bug fix the recent bam_plp_destroy memory leak removal"}},{"before":"fdbafae580257734080fd6b91b9d73839a959b06","after":"7db7e8371d5230bf222d55852880001979ae8e93","ref":"refs/heads/develop","pushedAt":"2024-02-23T09:36:39.000Z","pushType":"pr_merge","commitsCount":4,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Call the pileup destructor in bam_plp_destroy.\n\nThis frees memory when destroying earlier than expected, such as\nduring a processing failure.\n\nI can't figure out how this has been missed all these years!","shortMessageHtmlLink":"Call the pileup destructor in bam_plp_destroy."}},{"before":"98c2667326a15a81e476c188aa15a017ae76a921","after":"fdbafae580257734080fd6b91b9d73839a959b06","ref":"refs/heads/develop","pushedAt":"2024-02-19T16:11:30.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Further speed up sam_parse_B_vals.\n\nPreviously it parses the B string twice. Once to count commas and\nallocate memory and once to fill out the memory.\n\nNow the code reallocates periodically and thus only needs a single\npass. The effect on large B arrays is significant.\n\n 2-pass 1-pass develop\n gcc7 -O2: 5565824443 3299046885\n gcc7 -O3: 5779736469 3400756477\n gcc13 -O2: 5565893109 3086808341\n gcc13 -O3: 5392426978 3346007015 9724589000\n clang10 -O2: 5344657729 3465765165\n clang10 -O3: 5348030140 3460058513\n clang16 -O2: 4563321159 3374951558\n clang16 -O3: 4575986193 3311061338 6398268577\n\nSpeed instability was still observed by modifying code elsewhere so\nthis has been improved by splitting up the function and adding\nfunction alignment requests. We could achieve a similar result by\ncompiler options such as -falign-loops=32, but this affects all code\nand we have not evaluated the impact elsewhere.","shortMessageHtmlLink":"Further speed up sam_parse_B_vals."}},{"before":"f19b844b45e5e9dc6050d28fba02aba99edecb8f","after":"98c2667326a15a81e476c188aa15a017ae76a921","ref":"refs/heads/develop","pushedAt":"2024-02-19T15:20:33.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Add check for .gzi when determining whether to rebuild the fai index.\n\nFixes #1744","shortMessageHtmlLink":"Add check for .gzi when determining whether to rebuild the fai index."}},{"before":"34031e91070843a33a18002dfcb09562232f675f","after":"f19b844b45e5e9dc6050d28fba02aba99edecb8f","ref":"refs/heads/develop","pushedAt":"2024-02-19T14:50:55.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Added tests and changes to make the test work.","shortMessageHtmlLink":"Added tests and changes to make the test work."}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEUc1l9wA","startCursor":null,"endCursor":null}},"title":"Activity ยท samtools/htslib"}