{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":506845985,"defaultBranch":"master","name":"pytorch","ownerLogin":"izaitsevfb","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2022-06-24T01:53:45.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/108101595?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1705951487.0","currentOid":""},"activityList":{"items":[{"before":null,"after":"6b7fd159457028e3f1f363d4376356f01839f355","ref":"refs/heads/export-D52582853","pushedAt":"2024-01-22T19:24:47.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[codemod][highrisk] Fix shadowed variable in caffe2/caffe2/onnx/onnx_exporter.cc\n\nSummary:\nOur upcoming compiler upgrade will require us not to have shadowed variables. Such variables have a _high_ bug rate and reduce readability, so we would like to avoid them even if the compiler was not forcing us to do so.\n\nThis codemod attempts to fix an instance of a shadowed variable. Please review with care: if it's failed the result will be a silent bug.\n\n**What's a shadowed variable?**\n\nShadowed variables are variables in an inner scope with the same name as another variable in an outer scope. Having the same name for both variables might be semantically correct, but it can make the code confusing to read! It can also hide subtle bugs.\n\nThis diff fixes such an issue by renaming the variable.\n\n - If you approve of this diff, please use the \"Accept & Ship\" button :-)\n\nTest Plan: Sandcastle\n\nReviewed By: igorsugak\n\nDifferential Revision: D52582853","shortMessageHtmlLink":"[codemod][highrisk] Fix shadowed variable in caffe2/caffe2/onnx/onnx_…"}},{"before":"31459e3e56b619600f431b6814dfbadbe7bb8cad","after":"59ea77dba72accfa581423c580048b697789e89d","ref":"refs/heads/test-forked-pr-approvals","pushedAt":"2024-01-11T23:35:54.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"dummy change to test PR approval","shortMessageHtmlLink":"dummy change to test PR approval"}},{"before":null,"after":"31459e3e56b619600f431b6814dfbadbe7bb8cad","ref":"refs/heads/test-forked-pr-approvals","pushedAt":"2024-01-11T23:35:19.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[ONNX][dynamo_export] Add 'aten::rsub' type promotion (#113697)\n\nThe logic is the same as 'aten::sub'. Needed by llama2.\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/113697\nApproved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi\nghstack dependencies: #113404","shortMessageHtmlLink":"[ONNX][dynamo_export] Add 'aten::rsub' type promotion (pytorch#113697)"}},{"before":"03c39cafc17859d316ff54a1f87493c0b728ea1f","after":"32cab02bd61451a6033eac73d812ea76d00fc484","ref":"refs/heads/export-D52482968","pushedAt":"2024-01-02T17:28:10.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"Fix implicit conversion to double (#116614)\n\nSummary:\nPull Request resolved: https://github.com/pytorch/pytorch/pull/116614\n\nForward fix for https://github.com/pytorch/pytorch/pull/116185 / D52390113\n\nError:\n```\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:602:23: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] std::ceil(num_elements / static_cast(_max_load_factor))));\n[CONTEXT] ^~~~~~~~~~~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:923:22: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] num_elements + 1 >\n[CONTEXT] ~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:924:34: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] (num_slots_minus_one + 1) * static_cast(_max_load_factor)) {\n[CONTEXT] ~~~~~~~~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:923:22: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] num_elements + 1 >\n[CONTEXT] ~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:924:34: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] (num_slots_minus_one + 1) * static_cast(_max_load_factor)) {\n[CONTEXT] ~~~~~~~~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:923:22: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] num_elements + 1 >\n[CONTEXT] ~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:924:34: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] (num_slots_minus_one + 1) * static_cast(_max_load_factor)) {\n```\n\nFixed by casting int parts to double explicitly.\n\nTest Plan:\nSC\nhttps://www.internalfb.com/sandcastle/job/18014399657596484/\n\nReviewed By: jeanschmidt\n\nDifferential Revision: D52482968\n\nfbshipit-source-id: 7e1f14ed3e415acb1a0cda376a08b5d67e551af0","shortMessageHtmlLink":"Fix implicit conversion to double (pytorch#116614)"}},{"before":null,"after":"03c39cafc17859d316ff54a1f87493c0b728ea1f","ref":"refs/heads/export-D52482968","pushedAt":"2024-01-02T16:07:23.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"Fix implicit conversion to double\n\nSummary:\nForward fix for https://github.com/pytorch/pytorch/pull/116185 / D52390113\n\nError:\n```\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:602:23: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] std::ceil(num_elements / static_cast(_max_load_factor))));\n[CONTEXT] ^~~~~~~~~~~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:923:22: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] num_elements + 1 >\n[CONTEXT] ~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:924:34: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] (num_slots_minus_one + 1) * static_cast(_max_load_factor)) {\n[CONTEXT] ~~~~~~~~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:923:22: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] num_elements + 1 >\n[CONTEXT] ~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:924:34: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] (num_slots_minus_one + 1) * static_cast(_max_load_factor)) {\n[CONTEXT] ~~~~~~~~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:923:22: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] num_elements + 1 >\n[CONTEXT] ~~~~~~~~~~~~~^~~ ~\nxplat/caffe2/c10/util/order_preserving_flat_hash_map.h:924:34: error: implicit conversion from 'uint64_t' (aka 'unsigned long long') to 'double' may lose precision [-Werror,-Wimplicit-int-float-conversion]\n[CONTEXT] (num_slots_minus_one + 1) * static_cast(_max_load_factor)) {\n```\n\nFixed by casting int parts to double explicitly.\n\nTest Plan: SC\n\nDifferential Revision: D52482968\n\nfbshipit-source-id: e9c2c10f55ebfc58f76eccd94d3fc9aee846a037","shortMessageHtmlLink":"Fix implicit conversion to double"}},{"before":null,"after":"b234c592dc73ad2253c8beefbd4961e5e05c5937","ref":"refs/heads/export-D49109231","pushedAt":"2023-09-11T23:45:52.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[TEST][pytorch] Use cpuinfo to determine c10::ThreadPool thread number + internal patch\n\nSummary: Testing https://github.com/pytorch/pytorch/pull/107339 combined with internal patches.\n\nDifferential Revision: D49109231","shortMessageHtmlLink":"[TEST][pytorch] Use cpuinfo to determine c10::ThreadPool thread numbe…"}},{"before":null,"after":"20ecf5e1a9a430a77a351a140454f65aa6849911","ref":"refs/heads/export-D48132255","pushedAt":"2023-08-08T00:59:08.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[ez][inductor][fx pass] strengthen numerical check for batch fusion\n\nSummary:\nAs title.\nFor batch fusion, we use torch op to fuse and the result should be exactly same as the original ones.\npull request: https://github.com/pytorch/pytorch/pull/106731#issuecomment-1668662078\n\nTest Plan:\n```\nbuck test mode/dev-nosan //caffe2/test/inductor:group_batch_fusion\nFile changed: fbcode//caffe2/test/inductor/test_group_batch_fusion.py\nFile changed: fbsource//xplat/caffe2/test/inductor/test_group_batch_fusion.py\nBuck UI: https://www.internalfb.com/buck2/cf14a2dd-faee-417a-8d26-0b9326c944e4\nTest UI: https://www.internalfb.com/intern/testinfra/testrun/6755399617159540\nNetwork: Up: 0B Down: 0B\nJobs completed: 12. Time elapsed: 2:55.5s.\nTests finished: Pass 4. Fail 0. Fatal 0. Skip 0. Build failure 0\n```\n\nReviewed By: dshi7\n\nDifferential Revision: D48132255\n\nfbshipit-source-id: d244451357ebe17a3d0749607030c848572916c6","shortMessageHtmlLink":"[ez][inductor][fx pass] strengthen numerical check for batch fusion"}},{"before":null,"after":"2886d7ee605c182d0e2170996b8c246b565fdd7f","ref":"refs/heads/export-D46506433","pushedAt":"2023-06-07T01:22:37.235Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"Back out \"Remove `check` from `_prims_common`, replace with `torch._check*` (#102219)\", Back out \"Forwatd fix for D46427687\"\n\nTest Plan: revertitparrot\n\nReviewed By: malfet\n\nDifferential Revision: D46506433\n\nfbshipit-source-id: 45a2e3de8c729c1646a8e0b5863de92ba6492465","shortMessageHtmlLink":"Back out \"Remove check from _prims_common, replace with `torch._c…"}},{"before":"edebe413d3ab38a74ba00bcbcb80b1a36a9eeb67","after":"fa077377ea7703c27f2ecb0b01d0b16aa1cc9134","ref":"refs/heads/master","pushedAt":"2023-04-11T01:38:10.000Z","pushType":"push","commitsCount":130,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[PtE][CoreML] Create modelID as value not reference (#98655)\n\nSummary:\nhttps://www.internalfb.com/logview/details/instagram_ios_crashes/d5fd49a99f3ee21a82b66861de797711\n\nCoreML is crashing in torch::jit::mobile::coreml::CoreMLBackend::compile(c10::IValue, c10::Dict) (PTMCoreMLBackend.mm<175>)\n\nThis is related to the crash here https://www.internalfb.com/logview/details/instagram_ios_crashes/a8a317c8da13cd577529e1763364f496/?trace_key=8002f84f5ea00ac68b0dfb91878c754a&selected-logview-tab=shared\n\nkimishpatel's original fix here D44386623 by passing modelID by value instead of reference, however I believe it just moved the error to loadModel invocation.\n\nWhen we create a copy of modelID on loadModel invocation, it is a reference to the string within the preprocessed IValue payload. When the payload is deallocated, modelID is no longer valid and the dispatched thread still tries to use it causing the error\n\nTest Plan:\n```\nRunning with tpx session id: 2a77b7b1-7594-4479-8ac3-c01db29cf5cc\nTrace available for this run at /tmp/tpx-20230407-173155.849234-2a77b7b1-7594-4479-8ac3-c01db29cf5cc/trace.log\nRemoteExecution session id: reSessionID-2a77b7b1-7594-4479-8ac3-c01db29cf5cc-tpx\nI0407 17:31:55.970502 780835 ConfigeratorDomainConfigs.cpp:177] Notify user with updated size: 92 removed size: 0\nStarted reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/1970325002807752\n ✓ ListingSuccess: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests : 13 tests discovered (0.177)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchBITests/testBITextModel (0.028)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchBITests/testBIXRayModel (0.167)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCPUBlasTests/testGemmComplexDouble (0.001)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCPUBlasTests/testGemmComplexFloat (0.001)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCPUBlasTests/testGemmDouble (0.001)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCPUBlasTests/testGemmFloat (0.001)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCoreMLTests/testGanModel (0.303)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCoreMLTests/testMCSModel (0.395)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCoreMLTests/testMCSModelInvalidInputShape (0.305)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchCoreMLTests/testXirpModel (0.110)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchDynamicPyTorchTests/testDynamicPytorchFamFlDictModel (0.014)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchDynamicPyTorchTests/testDynamicPytorchFamFlModel (0.005)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - PyTorchDynamicPyTorchTests/testDynamicPyTorchXirpModel (0.065)\n ✓ Pass: //fbobjc/Apps/Internal/PyTorchPlayground:PyTorchPlaygroundTests - main (13.177)\n```\n\nDifferential Revision: D44808433\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/98655\nApproved by: https://github.com/SS-JIA, https://github.com/tiandiao123, https://github.com/kirklandsign","shortMessageHtmlLink":"[PtE][CoreML] Create modelID as value not reference (pytorch#98655)"}},{"before":"2bca64ae2856b1ad7353648f486a79fa1909cb7a","after":"edebe413d3ab38a74ba00bcbcb80b1a36a9eeb67","ref":"refs/heads/master","pushedAt":"2023-04-06T20:19:07.000Z","pushType":"push","commitsCount":465,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[inductor] fix scatter fallback and fallback in deterministic mode (#98339)\n\nFixes https://github.com/pytorch/pytorch/issues/93537\n\nadd `ir.ScatterFallback` to handle the mutation correctly of scatter/scatter_reduce fallback, also handle the case that `src` is a scalar, and lastly fallback in deterministic mode.\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/98339\nApproved by: https://github.com/jansel","shortMessageHtmlLink":"[inductor] fix scatter fallback and fallback in deterministic mode (p…"}},{"before":"931a4913b1c74697711cb0a51bb227b89ba27f24","after":"2bca64ae2856b1ad7353648f486a79fa1909cb7a","ref":"refs/heads/master","pushedAt":"2023-03-28T01:33:52.024Z","pushType":"push","commitsCount":349,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[Vulkan] Merge upsample_nearest2d and quantized_upsample_nearest2d (#97467)\n\nSummary: Merging quantized_upsample_nearest2d into upsample_nearest2d. Therefore, at::upsample_nearest2d can handle quantized vulkan input tensors.\n\nTest Plan:\nOn Mac\n```\ncd ~/fbsource\nbuck1 run -c pt.vulkan_full_precision=1 //xplat/caffe2:pt_vulkan_quantized_api_test_binAppleMac\\#macosx-arm64\n```\n\nOn Android\n```\ncd ~/fbsource\nbuck1 build -c ndk.custom_libcxx=false -c pt.enable_qpl=0 -c pt.vulkan_full_precision=1 //xplat/caffe2:pt_vulkan_quantized_api_test_binAndroid\\#android-arm64 --show-output\nadb push buck-out/gen/xplat/caffe2/pt_vulkan_quantized_api_test_binAndroid\\#android-arm64 /data/local/tmp/vulkan_quantized_api_test\nadb shell \"/data/local/tmp/vulkan_quantized_api_test\"\n```\n\nReviewed By: SS-JIA\n\nDifferential Revision: D44118212\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/97467\nApproved by: https://github.com/SS-JIA","shortMessageHtmlLink":"[Vulkan] Merge upsample_nearest2d and quantized_upsample_nearest2d (p…"}},{"before":"429091140e6dc629bdc86540f3a7ce42d46a4d70","after":"931a4913b1c74697711cb0a51bb227b89ba27f24","ref":"refs/heads/master","pushedAt":"2023-03-16T19:07:39.829Z","pushType":"push","commitsCount":206,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[inductor] Refactor memory management code in wrapper codegen (#96768)\n\nSummary: use inheritance to simplify CppWrapperCodeGen and to prepare for AOT codegen\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/96768\nApproved by: https://github.com/jansel","shortMessageHtmlLink":"[inductor] Refactor memory management code in wrapper codegen (pytorc…"}},{"before":"14efb81c9201a5049e2680802de5c5c7194d8f1e","after":null,"ref":"refs/heads/revert-96360","pushedAt":"2023-03-10T22:47:11.837Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"}},{"before":null,"after":"14efb81c9201a5049e2680802de5c5c7194d8f1e","ref":"refs/heads/revert-96360","pushedAt":"2023-03-10T20:53:13.581Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"Revert \"[PyTorch] Use c10::FastMap for memoizing in Pickler (#96360)\"\n\nThis reverts commit 69d3fa2e4d93f3367ceb3af62d78aedd317dca6c.","shortMessageHtmlLink":"Revert \"[PyTorch] Use c10::FastMap for memoizing in Pickler (pytorch#…"}},{"before":"0d5c849d48b9c421b6b98c42a6375a4a0b2dbba4","after":"429091140e6dc629bdc86540f3a7ce42d46a4d70","ref":"refs/heads/master","pushedAt":"2023-03-10T20:49:53.815Z","pushType":"push","commitsCount":2752,"pusher":{"login":"izaitsevfb","name":"Ivan Zaitsev","path":"/izaitsevfb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/108101595?s=80&v=4"},"commit":{"message":"[BE][MPS] Use convenience functions (#96521)\n\nIntroduce `getMPSScalarType(const Tensor&)` that calls `getMPSScalarType(t.scalar_type())`\nAnd replace `getMPSScalarType(t.scalar_type)` with `getMPSScalarType(t)` throughout the codebase\n\nFixes #ISSUE_NUMBER\n\nPull Request resolved: https://github.com/pytorch/pytorch/pull/96521\nApproved by: https://github.com/seemethere","shortMessageHtmlLink":"[BE][MPS] Use convenience functions (pytorch#96521)"}}],"hasNextPage":false,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAD5i7anAA","startCursor":null,"endCursor":null}},"title":"Activity · izaitsevfb/pytorch"}