XFS corruption when running "/usr/bin/vmware-uninstall-tools.pl" #144

Open
grass-lu opened this issue May 13, 2024 · 5 comments
@grass-lu

XFS corruption when cold-converting a VMDK image.
libguestfs 1.150.1
Error message:
libguestfs: trace: v2v: command "/usr/bin/vmware-uninstall-tools.pl"
took 0.00 secs
guestfsd: <= internal_write_append (0x122) request length 116 bytes
guestfsd: => internal_write_append (0x122) took 0.00 secs
guestfsd: <= command (0x32) request length 84 bytes
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: mount --bind /dev /sysroot/dev
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: mount --bind /dev/pts /sysroot/dev/pts
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: mount --bind /proc /sysroot/proc
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: mount --bind /sys/fs/selinux /sysroot/selinux
mount: /sysroot/selinux: mount point does not exist.
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: mount --bind /sys /sysroot/sys
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: mount --bind /sys/fs/selinux /sysroot/sys/fs/selinux
mount: /sysroot/sys/fs/selinux: mount point does not exist.
renaming /sysroot/etc/resolv.conf to /sysroot/etc/embeyfnn
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: cp /etc/resolv.conf /sysroot/etc/resolv.conf
commandrvf: stdout=y stderr=y flags=0x40000
commandrvf: /usr/bin/vmware-uninstall-tools.pl
[ 19.478075] ------------[ cut here ]------------
[ 19.478080] WARNING: CPU: 1 PID: 619 at fs/xfs/xfs_inode.c:1839 xfs_iunlink+0x154/0x1e0 [xfs]
[ 19.486542] Modules linked in: xfs dm_mod sg libcrc32c crc8 crc7 crc_itu_t virtiofs fuse ext4 mbcache jbd2 virtio_vdpa vdpa virtio_mem virtio_input virtio_dma_buf virtio_balloon virtio_scsi sd_mod t10_pi nd_pmem nd_btt virtio_net net_failover failover virtio_console virtio_blk ata_piix libata nfit libnvdimm crc32_generic crct10dif_pclmul crc32c_intel crc32_pclmul
[ 19.493290] CPU: 1 PID: 619 Comm: vmware-uninstal Not tainted 5.14.0-391.el9.x86_64 #1
[ 19.495071] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.1-1.el9 04/01/2014
[ 19.496560] RIP: 0010:xfs_iunlink+0x154/0x1e0 [xfs]
[ 19.497968] Code: 77 3a 4c 89 4c 24 08 e8 ca 73 69 ee 44 89 f6 48 8d bd c0 00 00 00 e8 6b 08 b5 ee 49 89 c4 48 85 c0 74 07 48 83 78 20 00 75 2c <0f> 0b e8 95 bc 69 ee 41 bc 8b ff ff ff e9 fb fe ff ff 48 c7 c6 e6
[ 19.501556] RSP: 0018:ffffa7c1c0d43c90 EFLAGS: 00010246
[ 19.502880] RAX: 0000000000000000 RBX: 00000000000d31d1 RCX: 000000000000000c
[ 19.504480] RDX: ffff8eec4b516480 RSI: ffff8eec4b516638 RDI: 0000000000072211
[ 19.506007] RBP: ffff8eec440c0a00 R08: 0000000000000000 R09: ffff8eec440c0ac0
[ 19.507893] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 19.509565] R13: ffff8eec4309f878 R14: 0000000000072211 R15: ffff8eec59e21000
[ 19.511044] FS: 00007f7d98490740(0000) GS:ffff8eecdce40000(0000) knlGS:0000000000000000
[ 19.512803] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 19.514040] CR2: 000000000189dc50 CR3: 000000009fe86001 CR4: 0000000000770ee0
[ 19.515562] PKRU: 55555554
[ 19.516125] Call Trace:
[ 19.516733]
[ 19.517160] ? show_trace_log_lvl+0x1c4/0x2df
[ 19.518413] ? show_trace_log_lvl+0x1c4/0x2df
[ 19.519554] ? xfs_remove+0x269/0x390 [xfs]
[ 19.520778] ? xfs_iunlink+0x154/0x1e0 [xfs]
[ 19.522129] ? __warn+0x81/0x110
[ 19.523082] ? xfs_iunlink+0x154/0x1e0 [xfs]
[ 19.524360] ? report_bug+0x10a/0x140
[ 19.525100] ? handle_bug+0x3c/0x70
[ 19.525924] ? exc_invalid_op+0x14/0x70
[ 19.526812] ? asm_exc_invalid_op+0x16/0x20
[ 19.527793] ? xfs_iunlink+0x154/0x1e0 [xfs]
[ 19.528908] xfs_remove+0x269/0x390 [xfs]
[ 19.530005] xfs_vn_unlink+0x53/0xa0 [xfs]
[ 19.530999] vfs_unlink+0x114/0x290
[ 19.531771] do_unlinkat+0x1af/0x2e0
[ 19.532666] __x64_sys_unlink+0x3e/0x60
[ 19.533561] do_syscall_64+0x59/0x90
[ 19.534418] ? __do_sys_newlstat+0x42/0x70
[ 19.535601] ? syscall_exit_to_user_mode+0x22/0x40
[ 19.536913] ? do_syscall_64+0x69/0x90
[ 19.537983] ? syscall_exit_to_user_mode+0x22/0x40
[ 19.539184] ? do_syscall_64+0x69/0x90
[ 19.540006] ? syscall_exit_to_user_mode+0x22/0x40
[ 19.541037] ? do_syscall_64+0x69/0x90
[ 19.541911] ? exc_page_fault+0x62/0x150
[ 19.542781] entry_SYSCALL_64_after_hwframe+0x72/0xdc
[ 19.543918] RIP: 0033:0x7f7d95ee9107
[ 19.544772] Code: 48 3d 00 f0 ff ff 77 03 48 98 c3 48 8b 15 79 0d 2d 00 f7 d8 64 89 02 48 83 c8 ff eb eb 66 0f 1f 44 00 00 b8 57 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 51 0d 2d 00 f7 d8 64 89 01 48
[ 19.549012] RSP: 002b:00007fff7ab8e9c8 EFLAGS: 00000206 ORIG_RAX: 0000000000000057
[ 19.550554] RAX: ffffffffffffffda RBX: 00000000016d8010 RCX: 00007f7d95ee9107
[ 19.551959] RDX: 00000000016d80a8 RSI: 00000000016d80a8 RDI: 0000000001bea230
[ 19.553462] RBP: 0000000001bfeb58 R08: 00007f7d97f5baf0 R09: fffffffffff8d6e4
[ 19.555028] R10: 0000000000000488 R11: 0000000000000206 R12: 0000000000000001
[ 19.556708] R13: 0000000001bea230 R14: 0000000001bfeb58 R15: 00000000016d80a8
[ 19.558124]
[ 19.558793] ---[ end trace 25886a30aa4d5526 ]---
[ 19.559908] XFS (dm-0): Internal error xfs_trans_cancel at line 1097 of file fs/xfs/xfs_trans.c. Caller xfs_remove+0x168/0x390 [xfs]
[ 19.560129] CPU: 1 PID: 619 Comm: vmware-uninstal Tainted: G W ------- --- 5.14.0-391.el9.x86_64 #1
[ 19.560132] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.1-1.el9 04/01/2014
[ 19.560133] Call Trace:
[ 19.560136]
[ 19.560138] dump_stack_lvl+0x34/0x48
[ 19.560144] xfs_trans_cancel+0x123/0x150 [xfs]
[ 19.560299] xfs_remove+0x168/0x390 [xfs]
[   19.560433]  xfs_vn_unlink+0x53/0xa0 [xfs]
[   19.560615]  vfs_unlink+0x114/0x290
[   19.560621]  do_unlinkat+0x1af/0x2e0
[   19.560624]  __x64_sys_unlink+0x3e/0x60
[   19.560625]  do_syscall_64+0x59/0x90
[   19.560631]  ? __do_sys_newlstat+0x42/0x70
[   19.560636]  ? syscall_exit_to_user_mode+0x22/0x40
[   19.560639]  ? do_syscall_64+0x69/0x90
[   19.560642]  ? syscall_exit_to_user_mode+0x22/0x40
[   19.560644]  ? do_syscall_64+0x69/0x90
[   19.560647]  ? syscall_exit_to_user_mode+0x22/0x40
[   19.560650]  ? do_syscall_64+0x69/0x90
[   19.560653]  ? exc_page_fault+0x62/0x150
[   19.560656]  entry_SYSCALL_64_after_hwframe+0x72/0xdc
[   19.560663] RIP: 0033:0x7f7d95ee9107
[   19.560681] Code: 48 3d 00 f0 ff ff 77 03 48 98 c3 48 8b 15 79 0d 2d 00 f7 d8 64 89 02 48 83 c8 ff eb eb 66 0f 1f 44 00 00 b8 57 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 51 0d 2d 00 f7 d8 64 89 01 48
[   19.560683] RSP: 002b:00007fff7ab8e9c8 EFLAGS: 00000206 ORIG_RAX: 0000000000000057
[   19.560686] RAX: ffffffffffffffda RBX: 00000000016d8010 RCX: 00007f7d95ee9107
[   19.560688] RDX: 00000000016d80a8 RSI: 00000000016d80a8 RDI: 0000000001bea230
[   19.560689] RBP: 0000000001bfeb58 R08: 00007f7d97f5baf0 R09: fffffffffff8d6e4
[   19.560691] R10: 0000000000000488 R11: 0000000000000206 R12: 0000000000000001
[   19.560692] R13: 0000000001bea230 R14: 0000000001bfeb58 R15: 00000000016d80a8
[   19.560695]
[   19.562391] XFS (dm-0): Corruption of in-memory data (0x8) detected at xfs_trans_cancel+0x13c/0x150 [xfs] (fs/xfs/xfs_trans.c:1098).  Shutting down filesystem.
[   19.562599] XFS (dm-0): Please unmount the filesystem and rectify the problem(s)
[   19.562689] XFS (dm-0): xfs_difree: xfs_ialloc_read_agi() returned error -5.
tee: /var/log/vmware-install.log: Input/output error
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: umount /sysroot/sys
commandrvf: stdout=n stderr=n flags=0x0
commandrvf: umount /sysroot/proc
commandrvf: stdout=n stderr=n flags=0x0


@grass-lu
Author

the full message:
full_message_cold.txt

@rwmjones
Member

I think what happened is that virt-v2v ran the VMware program /usr/bin/vmware-uninstall-tools.pl inside the guest (to uninstall VMware Tools). However, the guest filesystem has some sort of disk corruption - either it was corrupt already, or it was corrupted by something else we did - and that corruption was so bad that the kernel could no longer continue.

I think the first thing to do here is:

If filesystems in the guest are corrupt before conversion, then you'll have to fix that.
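One way to check whether the guest filesystem was already damaged before conversion is to run xfs_repair through guestfish on a copy of the image. This is a sketch, not part of the thread; the image name guest.vmdk and the device name /dev/sda2 are assumptions, so list the real filesystems first with virt-filesystems:

```shell
# Work on a copy so the original image is never modified.
cp guest.vmdk guest-check.vmdk

# Find the real filesystem devices inside the image
# (the /dev/sda2 below is only an example).
virt-filesystems -a guest-check.vmdk --long

# Run xfs_repair against the (unmounted) XFS filesystem.
guestfish -a guest-check.vmdk <<'EOF'
run
xfs-repair /dev/sda2
EOF
```

If xfs_repair reports and fixes problems here, the corruption predates virt-v2v and the source guest should be repaired (or restored from backup) before retrying the conversion.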

@grass-lu
Author

The original guest filesystem is good. I rebuilt the libguestfs appliance, and then the problem occurred.

@grass-lu
Author

The official release of the libguestfs appliance is good; there is no problem with it.

@rwmjones
Member

So it's likely because of a difference between the kernel version in the released libguestfs appliance and the one in your rebuilt appliance (5.14.0-391.el9.x86_64). Nevertheless, this is most likely a problem with the guest filesystem being corrupt or getting corrupted, not a problem in libguestfs.
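To confirm the kernel difference between the two appliances, you can boot each appliance with libguestfs-test-tool and compare the kernel version printed in the appliance's boot messages. This is a sketch; the grep pattern assumes the appliance's dmesg output appears in the tool's debug output:

```shell
# Boot the libguestfs appliance and capture its debug output, then
# pull out the kernel banner from the appliance's boot messages.
libguestfs-test-tool 2>&1 | grep -i 'linux version'
```

Running this once against the rebuilt appliance and once against the official release (e.g. by switching the appliance via the LIBGUESTFS_PATH environment variable) shows which kernels are actually being booted.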
