X-Git-Url: https://git.whamcloud.com/?a=blobdiff_plain;f=LustreDebugging.xml;h=009f864a3239025cabe86d7d8ee70fc09c3daeb7;hb=f4fe3ca4b3df0d49a691a4eacca891ea782edb7b;hp=702b36982e42b65cbbcf26540475b93d272ae741;hpb=04f4ce614afadb717a1ea3e685286950315979be;p=doc%2Fmanual.git diff --git a/LustreDebugging.xml b/LustreDebugging.xml index 702b369..009f864 100644 --- a/LustreDebugging.xml +++ b/LustreDebugging.xml @@ -4,7 +4,7 @@ Lustre Debugging - This chapter describes tips and information to debug Lustre, and includes the following sections: + This chapter describes tips and information to debug Lustre, and includes the following sections: @@ -21,120 +21,120 @@
28.1 Diagnostic and Debugging Tools - A variety of diagnostic and analysis tools are available to debug issues with the Lustre software. Some of these are provided in Linux distributions, while others have been developed and are made available by the Lustre project. + A variety of diagnostic and analysis tools are available to debug issues with the Lustre software. Some of these are provided in Linux distributions, while others have been developed and are made available by the Lustre project.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295667" xreflabel=""/>28.1.1 Lustre Debugging Tools - The following in-kernel debug mechanisms are incorporated into the Lustre software: + 28.1.1 Lustre Debugging Tools + The following in-kernel debug mechanisms are incorporated into the Lustre software: - Debug logs - A circular debug buffer to which Lustre internal debug messages are written (in contrast to error messages, which are printed to the syslog or console). Entries to the Lustre debug log are controlled by the mask set by /proc/sys/lnet/debug. The log size defaults to 5 MB per CPU but can be increased as a busy system will quickly overwrite 5 MB. When the buffer fills, the oldest information is discarded. + Debug logs - A circular debug buffer to which Lustre internal debug messages are written (in contrast to error messages, which are printed to the syslog or console). Entries to the Lustre debug log are controlled by the mask set by /proc/sys/lnet/debug. The log size defaults to 5 MB per CPU but can be increased as a busy system will quickly overwrite 5 MB. When the buffer fills, the oldest information is discarded. - Debug daemon - The debug daemon controls logging of debug messages. + Debug daemon - The debug daemon controls logging of debug messages. - /proc/sys/lnet/debug - This file contains a mask that can be used to delimit the debugging information written out to the kernel debug logs. + /proc/sys/lnet/debug - This file contains a mask that can be used to delimit the debugging information written out to the kernel debug logs. - The following tools are also provided with the Lustre software: + The following tools are also provided with the Lustre software: - lctl - This tool is used with the debug_kernel option to manually dump the Lustre debugging log or post-process debugging logs that are dumped automatically. For more information about the lctl tool, see and (lctl). 
+ lctl - This tool is used with the debug_kernel option to manually dump the Lustre debugging log or post-process debugging logs that are dumped automatically. For more information about the lctl tool, see and (lctl).

- Lustre subsystem asserts - A panic-style assertion (LBUG) in the kernel causes Lustre to dump the debug log to the file /tmp/lustre-log.<timestamp> where it can be retrieved after a reboot. For more information, see Viewing Error Messages.
+ Lustre subsystem asserts - A panic-style assertion (LBUG) in the kernel causes Lustre to dump the debug log to the file /tmp/lustre-log.<timestamp> where it can be retrieved after a reboot. For more information, see Viewing Error Messages.

- lfs - This utility provides access to the extended attributes (EAs) of a Lustre file (along with other information). For more information about lfs, see lfs.
+ lfs - This utility provides access to the extended attributes (EAs) of a Lustre file (along with other information). For more information about lfs, see lfs.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295688" xreflabel=""/>28.1.2 External Debugging Tools - The tools described in this section are provided in the Linux kernel or are available at an external website. For information about using some of these tools for Lustre debugging, see Lustre Debugging Procedures and Lustre Debugging for Developers. + 28.1.2 External Debugging Tools + The tools described in this section are provided in the Linux kernel or are available at an external website. For information about using some of these tools for Lustre debugging, see Lustre Debugging Procedures and Lustre Debugging for Developers.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295696" xreflabel=""/>28.1.2.1 Tools for Administrators and Developers
- Some general debugging tools provided as a part of the standard Linux distribution are:
+ 28.1.2.1 Tools for Administrators and Developers
+ Some general debugging tools provided as a part of the standard Linux distribution are:

- strace . This tool allows a system call to be traced.
+ strace . This tool allows a system call to be traced.

- /var/log/messages . syslogd writes fatal and serious messages to this log.
+ /var/log/messages . syslogd writes fatal and serious messages to this log.

- Crash dumps . On crash-dump enabled kernels, sysrq c produces a crash dump. Lustre enhances this crash dump with a log dump (the last 64 KB of the log) to the console.
+ Crash dumps . On crash-dump enabled kernels, sysrq c produces a crash dump. Lustre enhances this crash dump with a log dump (the last 64 KB of the log) to the console.

- debugfs . Interactive file system debugger.
+ debugfs . Interactive file system debugger.

- The following logging and data collection tools can be used to collect information for debugging Lustre kernel issues:
+ The following logging and data collection tools can be used to collect information for debugging Lustre kernel issues:

- kdump . A Linux kernel crash utility useful for debugging a system running Red Hat Enterprise Linux. For more information about kdump, see the Red Hat knowledge base article How do I configure kexec/kdump on Red Hat Enterprise Linux 5?. To download kdump, go to the Fedora Project Download site.
+ kdump . A Linux kernel crash utility useful for debugging a system running Red Hat Enterprise Linux. For more information about kdump, see the Red Hat knowledge base article How do I configure kexec/kdump on Red Hat Enterprise Linux 5?. To download kdump, go to the Fedora Project Download site.

- netconsole . Enables kernel-level network logging over UDP.
A system request (SysRq) allows users to collect relevant data through netconsole.
+ netconsole . Enables kernel-level network logging over UDP. A system request (SysRq) allows users to collect relevant data through netconsole.

- netdump . A crash dump utility from Red Hat that allows memory images to be dumped over a network to a central server for analysis. The netdump utility was replaced by kdump in RHEL 5. For more information about netdump, see Red Hat, Inc.'s Network Console and Crash Dump Facility.
+ netdump . A crash dump utility from Red Hat that allows memory images to be dumped over a network to a central server for analysis. The netdump utility was replaced by kdump in RHEL 5. For more information about netdump, see Red Hat, Inc.'s Network Console and Crash Dump Facility.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295709" xreflabel=""/>28.1.2.2 Tools for Developers
- The tools described in this section may be useful for debugging Lustre in a development environment.
- Of general interest is:
+ 28.1.2.2 Tools for Developers
+ The tools described in this section may be useful for debugging Lustre in a development environment.
+ Of general interest is:

- leak_finder.pl . This program provided with Lustre is useful for finding memory leaks in the code.
+ leak_finder.pl . This program provided with Lustre is useful for finding memory leaks in the code.

- A virtual machine is often used to create an isolated development and test environment. Some commonly used virtual machines are:
+ A virtual machine is often used to create an isolated development and test environment. Some commonly used virtual machines are:

- VirtualBox Open Source Edition . Provides enterprise-class virtualization capability for all major platforms and is available free at Get Sun VirtualBox.
+ VirtualBox Open Source Edition . Provides enterprise-class virtualization capability for all major platforms and is available free at Get Sun VirtualBox.

- VMware Server . Virtualization platform available as free introductory software at Download VMware Server.
+ VMware Server . Virtualization platform available as free introductory software at Download VMware Server.

- Xen . A para-virtualized environment with virtualization capabilities similar to VMware Server and VirtualBox. However, Xen allows the use of modified kernels to provide near-native performance and the ability to emulate shared storage. For more information, go to xen.org.
+ Xen . A para-virtualized environment with virtualization capabilities similar to VMware Server and VirtualBox. However, Xen allows the use of modified kernels to provide near-native performance and the ability to emulate shared storage. For more information, go to xen.org.
- A variety of debuggers and analysis tools are available, including:
+ A variety of debuggers and analysis tools are available, including:

- kgdb . The Linux Kernel Source Level Debugger kgdb is used in conjunction with the GNU Debugger gdb for debugging the Linux kernel. For more information about using kgdb with gdb, see Chapter 6. Running Programs Under gdb in the Red Hat Linux 4 Debugging with GDB guide.
+ kgdb . The Linux Kernel Source Level Debugger kgdb is used in conjunction with the GNU Debugger gdb for debugging the Linux kernel. For more information about using kgdb with gdb, see Chapter 6. Running Programs Under gdb in the Red Hat Linux 4 Debugging with GDB guide.

- crash . Used to analyze saved crash dump data when a system has panicked, locked up, or appears unresponsive. For more information about using crash to analyze a crash dump, see:
+ crash . Used to analyze saved crash dump data when a system has panicked, locked up, or appears unresponsive. For more information about using crash to analyze a crash dump, see:

- Red Hat Magazine article: A quick overview of Linux kernel crash dump analysis
+ Red Hat Magazine article: A quick overview of Linux kernel crash dump analysis

- Crash Usage: A Case Study from the white paper Red Hat Crash Utility by David Anderson
+ Crash Usage: A Case Study from the white paper Red Hat Crash Utility by David Anderson

- Kernel Trap forum entry: Linux: Kernel Crash Dumps
+ Kernel Trap forum entry: Linux: Kernel Crash Dumps

- White paper: A Quick Overview of Linux Kernel Crash Dump Analysis
+ White paper: A Quick Overview of Linux Kernel Crash Dump Analysis

@@ -145,23 +145,23 @@
28.2 Lustre Debugging Procedures
- The procedures below may be useful to administrators or developers debugging a Lustre file system.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295735" xreflabel=""/>28.2.1 Understanding the Lustre Debug Messaging Format
- Lustre debug messages are categorized by originating subsystem, message type, and location in the source code. For a list of subsystems and message types, see .
+ 28.2.1 Understanding the Lustre Debug Messaging Format
+ Lustre debug messages are categorized by originating subsystem, message type, and location in the source code. For a list of subsystems and message types, see .
For a current list of subsystems and debug message types, see lnet/include/libcfs/libcfs.h in the Lustre tree.
- The elements of a Lustre debug message are described in Format of Lustre Debug Messages.
+ The elements of a Lustre debug message are described in Format of Lustre Debug Messages.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295747" xreflabel=""/>28.2.1.1 <anchor xml:id="dbdoclet.50438274_57603" xreflabel=""/>Lustre <anchor xml:id="dbdoclet.50438274_marker-1295746" xreflabel=""/>Debug Messages - Each Lustre debug message has the tag of the subsystem it originated in, the message type, and the location in the source code. The subsystems and debug types used in Lustre are as follows: + 28.2.1.1 <anchor xml:id="dbdoclet.50438274_57603" xreflabel=""/>Lustre <anchor xml:id="dbdoclet.50438274_marker-1295746" xreflabel=""/>Debug Messages + Each Lustre debug message has the tag of the subsystem it originated in, the message type, and the location in the source code. The subsystems and debug types used in Lustre are as follows: - Standard Subsystems: + Standard Subsystems: - mdc, mds, osc, ost, obdclass, obdfilter, llite, ptlrpc, portals, lnd, ldlm, lov + mdc, mds, osc, ost, obdclass, obdfilter, llite, ptlrpc, portals, lnd, ldlm, lov - Debug Types: + Debug Types: @@ -170,94 +170,94 @@ - Types - Description + Types + Description - trace - Entry/Exit markers + trace + Entry/Exit markers - dlmtrace - Locking-related information + dlmtrace + Locking-related information - inode -   + inode +   - super -   + super +   - ext2 - Anything from the ext2_debug + ext2 + Anything from the ext2_debug - malloc - Print malloc or free information + malloc + Print malloc or free information - cache - Cache-related information + cache + Cache-related information - info - General information + info + General information - ioctl - IOCTL-related information + ioctl + IOCTL-related information - blocks - Ext2 block allocation information + blocks + Ext2 block allocation information - net - Networking + net + Networking - warning -   + warning +   - buffs -   + buffs +   - other -   + other +   - dentry -   + dentry +   - portals - Entry/Exit markers + portals + Entry/Exit markers - page - Bulk page handling + page + Bulk page handling - error - Error messages + error + Error 
messages - emerg -   + emerg +   - rpctrace - For distributed debugging + rpctrace + For distributed debugging - ha - Failover and recovery-related information + ha + Failover and recovery-related information @@ -267,379 +267,379 @@
- <anchor xml:id="dbdoclet.50438274_pgfId-1295842" xreflabel=""/>28.2.1.2 <anchor xml:id="dbdoclet.50438274_57177" xreflabel=""/>Format of Lustre Debug Messages
- Lustre uses the CDEBUG and CERROR macros to print the debug or error messages. To print the message, the CDEBUG macro uses portals_debug_msg (portals/linux/oslib/debug.c). The message format is described below, along with an example.
+ 28.2.1.2 <anchor xml:id="dbdoclet.50438274_57177" xreflabel=""/>Format of Lustre Debug Messages
+ Lustre uses the CDEBUG and CERROR macros to print the debug or error messages. To print the message, the CDEBUG macro uses portals_debug_msg (portals/linux/oslib/debug.c). The message format is described below, along with an example.

- Parameter
- Description
+ Parameter
+ Description

- subsystem
- 800000
+ subsystem
+ 800000

- debug mask
- 000010
+ debug mask
+ 000010

- smp_processor_id
- 0
+ smp_processor_id
+ 0

- sec.usec
- 10818808 47.677302
+ sec.usec
+ 10818808 47.677302

- stack size
- 1204:
+ stack size
+ 1204:

- pid
- 2973:
+ pid
+ 2973:

- host pid (if uml) or zero
- 31070:
+ host pid (if uml) or zero
+ 31070:

- (file:line #:function())
- (as_dev.c:144:create_write_buffers())
+ (file:line #:function())
+ (as_dev.c:144:create_write_buffers())

- debug message
- kmalloced '*obj': 24 at a375571c (tot 17447717)
+ debug message
+ kmalloced '*obj': 24 at a375571c (tot 17447717)
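The prefix fields of a raw debug log line are colon-separated, so they can be pulled apart with standard shell tools. The line below is assembled from the sample values in the table (masks zero-padded) and is illustrative only:

```shell
# Illustrative Lustre debug log line, assembled from the sample values above.
line="00800000:00000010:0:10818808.477302:1204:2973:31070:(as_dev.c:144:create_write_buffers()) kmalloced '*obj': 24 at a375571c (tot 17447717)"

# The leading colon-separated fields form the fixed message prefix.
subsystem=$(echo "$line" | cut -d: -f1)   # subsystem mask
mask=$(echo "$line" | cut -d: -f2)        # debug type mask
cpu=$(echo "$line" | cut -d: -f3)         # smp_processor_id
pid=$(echo "$line" | cut -d: -f6)         # process id

echo "subsystem=$subsystem mask=$mask cpu=$cpu pid=$pid"
```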
- <anchor xml:id="dbdoclet.50438274_pgfId-1295886" xreflabel=""/>28.2.1.3 Lustre <anchor xml:id="dbdoclet.50438274_marker-1295885" xreflabel=""/>Debug Messages Buffer - Lustre debug messages are maintained in a buffer, with the maximum buffer size specified (in MBs) by the debug_mb parameter (/proc/sys/lnet/debug_mb). The buffer is circular, so debug messages are kept until the allocated buffer limit is reached, and then the first messages are overwritten. + 28.2.1.3 Lustre <anchor xml:id="dbdoclet.50438274_marker-1295885" xreflabel=""/>Debug Messages Buffer + Lustre debug messages are maintained in a buffer, with the maximum buffer size specified (in MBs) by the debug_mb parameter (/proc/sys/lnet/debug_mb). The buffer is circular, so debug messages are kept until the allocated buffer limit is reached, and then the first messages are overwritten.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295889" xreflabel=""/>28.2.2 <anchor xml:id="dbdoclet.50438274_62472" xreflabel=""/>Using the lctl Tool to View Debug Messages - The lctl tool allows debug messages to be filtered based on subsystems and message types to extract information useful for troubleshooting from a kernel debug log. For a command reference, see lctl. - You can use lctl to: + 28.2.2 <anchor xml:id="dbdoclet.50438274_62472" xreflabel=""/>Using the lctl Tool to View Debug Messages + The lctl tool allows debug messages to be filtered based on subsystems and message types to extract information useful for troubleshooting from a kernel debug log. For a command reference, see lctl. + You can use lctl to: - Obtain a list of all the types and subsystems: + Obtain a list of all the types and subsystems: - lctl > debug_list <subs | types> + lctl > debug_list <subs | types> - Filter the debug log: + Filter the debug log: - lctl > filter <subsystem name | debug type> + lctl > filter <subsystem name | debug type> When lctl filters, it removes unwanted lines from the displayed output. This does not affect the contents of the debug log in the kernel's memory. As a result, you can print the log many times with different filtering levels without worrying about losing data. 
- Show debug messages belonging to a certain subsystem or type:
- lctl > show <subsystem name | debug type>
- debug_kernel pulls the data from the kernel logs, filters it appropriately, and displays or saves it as per the specified options.
- lctl > debug_kernel [output filename]
+ lctl > show <subsystem name | debug type>
+ debug_kernel pulls the data from the kernel logs, filters it appropriately, and displays or saves it as per the specified options.
+ lctl > debug_kernel [output filename]

- If the debugging is being done on User Mode Linux (UML), it might be useful to save the logs on the host machine so that they can be used at a later time.
+ If the debugging is being done on User Mode Linux (UML), it might be useful to save the logs on the host machine so that they can be used at a later time.

- Filter a log on disk, if you already have a debug log saved to disk (likely from a crash):
+ Filter a log on disk, if you already have a debug log saved to disk (likely from a crash):

- lctl > debug_file <input filename> [output filename]
+ lctl > debug_file <input filename> [output filename]

- During the debug session, you can add markers or breaks to the log for any reason:
- lctl > mark [marker text]
+ During the debug session, you can add markers or breaks to the log for any reason:
+ lctl > mark [marker text]

- The marker text defaults to the current date and time in the debug log (similar to the example shown below):
- DEBUG MARKER: Tue Mar 5 16:06:44 EST 2002
+ The marker text defaults to the current date and time in the debug log (similar to the example shown below):
+ DEBUG MARKER: Tue Mar 5 16:06:44 EST 2002

- Completely flush the kernel debug buffer:
+ Completely flush the kernel debug buffer:

- lctl > clear
+ lctl > clear
Debug messages displayed with lctl are also subject to the kernel debug masks; the filters are additive.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295915" xreflabel=""/>28.2.2.1 Sample lctl <anchor xml:id="dbdoclet.50438274_marker-1295914" xreflabel=""/>Run
- Below is a sample run using the lctl command.
- bash-2.04# ./lctl
-lctl > debug_kernel /tmp/lustre_logs/log_all
-Debug log: 324 lines, 324 kept, 0 dropped.
-lctl > filter trace
-Disabling output of type "trace"
-lctl > debug_kernel /tmp/lustre_logs/log_notrace
-Debug log: 324 lines, 282 kept, 42 dropped.
-lctl > show trace
-Enabling output of type "trace"
-lctl > filter portals
-Disabling output from subsystem "portals"
-lctl > debug_kernel /tmp/lustre_logs/log_noportals
-Debug log: 324 lines, 258 kept, 66 dropped.
+ 28.2.2.1 Sample lctl <anchor xml:id="dbdoclet.50438274_marker-1295914" xreflabel=""/>Run
+ Below is a sample run using the lctl command.
+ bash-2.04# ./lctl
+lctl > debug_kernel /tmp/lustre_logs/log_all
+Debug log: 324 lines, 324 kept, 0 dropped.
+lctl > filter trace
+Disabling output of type "trace"
+lctl > debug_kernel /tmp/lustre_logs/log_notrace
+Debug log: 324 lines, 282 kept, 42 dropped.
+lctl > show trace
+Enabling output of type "trace"
+lctl > filter portals
+Disabling output from subsystem "portals"
+lctl > debug_kernel /tmp/lustre_logs/log_noportals
+Debug log: 324 lines, 258 kept, 66 dropped.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295930" xreflabel=""/>28.2.3 Dumping the Buffer to a File (debug_daemon)
- The debug_daemon option is used by lctl to control the dumping of the debug_kernel buffer to a user-specified file. This functionality uses a kernel thread on top of debug_kernel, which works in parallel with the debug_daemon command.
- The debug_daemon is highly dependent on file system write speed. File system write operations may not be fast enough to flush out all of the debug_buffer if the Lustre file system is under heavy system load and continues to CDEBUG to the debug_buffer. The debug_daemon will write the message DEBUG MARKER: Trace buffer full into the debug_buffer to indicate the debug_buffer contents are overlapping before the debug_daemon flushes data to a file.
- Users can use lctl to start or stop the daemon dumping the debug_buffer to a file, and can also temporarily pause the daemon from dumping to the file. Use of the debug_daemon sub-command to lctl can provide the same function.
+ 28.2.3 Dumping the Buffer to a File (debug_daemon)
+ The debug_daemon option is used by lctl to control the dumping of the debug_kernel buffer to a user-specified file. This functionality uses a kernel thread on top of debug_kernel, which works in parallel with the debug_daemon command.
+ The debug_daemon is highly dependent on file system write speed. File system write operations may not be fast enough to flush out all of the debug_buffer if the Lustre file system is under heavy system load and continues to CDEBUG to the debug_buffer. The debug_daemon will write the message DEBUG MARKER: Trace buffer full into the debug_buffer to indicate the debug_buffer contents are overlapping before the debug_daemon flushes data to a file.
+ Users can use lctl to start or stop the daemon dumping the debug_buffer to a file, and can also temporarily pause the daemon from dumping to the file.
Use of the debug_daemon sub-command to lctl can provide the same function.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295934" xreflabel=""/>28.2.3.1 lctl debug_daemon Commands
- This section describes lctl debug_daemon commands.
- To initiate the debug_daemon to start dumping debug_buffer into a file, enter:
- $ lctl debug_daemon start [{file} {megabytes}]
+ 28.2.3.1 lctl debug_daemon Commands
+ This section describes lctl debug_daemon commands.
+ To initiate the debug_daemon to start dumping debug_buffer into a file, enter:
+ $ lctl debug_daemon start [{file} {megabytes}]

- The file can be a system default file, as shown in /proc/sys/lnet/debug_path. After Lustre starts, the default path is /tmp/lustre-log-$HOSTNAME. Users can specify a new filename for debug_daemon to output debug_buffer. The new file name shows up in /proc/sys/lnet/debug_path. The megabytes parameter limits the file size in MB.
- The daemon wraps around and dumps data to the beginning of the file when the output file size is over the limit of the user-specified file size. To decode the dumped file to ASCII and order the log entries by time, run:
- lctl debug_file {file} > {newfile}
+ The file can be a system default file, as shown in /proc/sys/lnet/debug_path. After Lustre starts, the default path is /tmp/lustre-log-$HOSTNAME. Users can specify a new filename for debug_daemon to output debug_buffer. The new file name shows up in /proc/sys/lnet/debug_path. The megabytes parameter limits the file size in MB.
+ The daemon wraps around and dumps data to the beginning of the file when the output file size is over the limit of the user-specified file size. To decode the dumped file to ASCII and order the log entries by time, run:
+ lctl debug_file {file} > {newfile}

- The output is internally sorted by the lctl command using quicksort.
- To completely shut down the debug_daemon operation and flush the file output, enter:
- debug_daemon stop
+ The output is internally sorted by the lctl command using quicksort.
+ To completely shut down the debug_daemon operation and flush the file output, enter:
+ debug_daemon stop

- Otherwise, debug_daemon is shut down as part of the Lustre file system shutdown process. Users can restart debug_daemon by using the start command after each stop command issued.
- This is an example using debug_daemon with the interactive mode of lctl to dump debug logs to a 40 MB file.
- #~/utils/lctl
+ Otherwise, debug_daemon is shut down as part of the Lustre file system shutdown process. Users can restart debug_daemon by using the start command after each stop command issued.
+ This is an example using debug_daemon with the interactive mode of lctl to dump debug logs to a 40 MB file.
+ #~/utils/lctl

- To start the daemon to dump debug_buffer into a 40 MB /trace/log file, enter:
- lctl > debug_daemon start /trace/log 40
+ To start the daemon to dump debug_buffer into a 40 MB /trace/log file, enter:
+ lctl > debug_daemon start /trace/log 40

- To completely shut down the daemon, enter:
- lctl > debug_daemon stop
+ To completely shut down the daemon, enter:
+ lctl > debug_daemon stop

- To start another daemon with an unlimited file size, enter:
- lctl > debug_daemon start /tmp/unlimited
+ To start another daemon with an unlimited file size, enter:
+ lctl > debug_daemon start /tmp/unlimited

- The text message *** End of debug_daemon trace log *** appears at the end of each output file.
+ The text message *** End of debug_daemon trace log *** appears at the end of each output file.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295954" xreflabel=""/>28.2.4 Controlling Information Written to the Kernel <anchor xml:id="dbdoclet.50438274_marker-1295955" xreflabel=""/>Debug Log
- Masks are provided in /proc/sys/lnet/subsystem_debug and /proc/sys/lnet/debug to be used with the sysctl command to determine what information is to be written to the debug log. The subsystem_debug mask determines the information written to the log based on the subsystem (such as obdfilter, net, portals, or OSC). The debug mask controls information based on debug type (such as info, error, trace, or alloc).
- To turn off Lustre debugging completely:
- sysctl -w lnet.debug=0
+ 28.2.4 Controlling Information Written to the Kernel <anchor xml:id="dbdoclet.50438274_marker-1295955" xreflabel=""/>Debug Log
+ Masks are provided in /proc/sys/lnet/subsystem_debug and /proc/sys/lnet/debug to be used with the sysctl command to determine what information is to be written to the debug log. The subsystem_debug mask determines the information written to the log based on the subsystem (such as obdfilter, net, portals, or OSC). The debug mask controls information based on debug type (such as info, error, trace, or alloc).
+ To turn off Lustre debugging completely:
+ sysctl -w lnet.debug=0

- To turn on full Lustre debugging:
- sysctl -w lnet.debug=-1
+ To turn on full Lustre debugging:
+ sysctl -w lnet.debug=-1

- To turn on logging of messages related to network communications:
- sysctl -w lnet.debug=net
+ To turn on logging of messages related to network communications:
+ sysctl -w lnet.debug=net

- To turn on logging of messages related to network communications in addition to existing debug flags:
- sysctl -w lnet.debug=+net
+ To turn on logging of messages related to network communications in addition to existing debug flags:
+ sysctl -w lnet.debug=+net

- To turn off network logging without changing existing flags:
- sysctl -w lnet.debug=-net
+ To turn off network logging without changing existing flags:
+ sysctl -w lnet.debug=-net

- The various options available to print to kernel debug logs are listed in lnet/include/libcfs/libcfs.h.
+ The various options available to print to kernel debug logs are listed in lnet/include/libcfs/libcfs.h.
- <anchor xml:id="dbdoclet.50438274_pgfId-1295970" xreflabel=""/>28.2.5 <anchor xml:id="dbdoclet.50438274_26909" xreflabel=""/>Troubleshooting with strace<anchor xml:id="dbdoclet.50438274_marker-1295969" xreflabel=""/>
- The strace utility provided with the Linux distribution enables system calls to be traced by intercepting all the system calls made by a process and recording the system call name, arguments, and return values.
- To invoke strace on a program, enter:
- $ strace <program> <args>
+ 28.2.5 <anchor xml:id="dbdoclet.50438274_26909" xreflabel=""/>Troubleshooting with strace<anchor xml:id="dbdoclet.50438274_marker-1295969" xreflabel=""/>
+ The strace utility provided with the Linux distribution enables system calls to be traced by intercepting all the system calls made by a process and recording the system call name, arguments, and return values.
+ To invoke strace on a program, enter:
+ $ strace <program> <args>

- Sometimes, a system call may fork child processes. In this situation, use the -f option of strace to trace the child processes:
- $ strace -f <program> <args>
+ Sometimes, a system call may fork child processes. In this situation, use the -f option of strace to trace the child processes:
+ $ strace -f <program> <args>

- To redirect the strace output to a file, enter:
- $ strace -o <filename> <program> <args>
+ To redirect the strace output to a file, enter:
+ $ strace -o <filename> <program> <args>

- Use the -ff option, along with -o, to save the trace output in filename.pid, where pid is the process ID of the process being traced. Use the -ttt option to timestamp all lines in the strace output, so they can be correlated to operations in the Lustre kernel debug log.
- If the debugging is done in UML, save the traces on the host machine. In this example, hostfs is mounted on /r:
- $ strace -o /r/tmp/vi.strace
+ Use the -ff option, along with -o, to save the trace output in filename.pid, where pid is the process ID of the process being traced. Use the -ttt option to timestamp all lines in the strace output, so they can be correlated to operations in the Lustre kernel debug log.
+ If the debugging is done in UML, save the traces on the host machine. In this example, hostfs is mounted on /r:
+ $ strace -o /r/tmp/vi.strace
- <anchor xml:id="dbdoclet.50438274_pgfId-1295983" xreflabel=""/>28.2.6 <anchor xml:id="dbdoclet.50438274_54455" xreflabel=""/>Looking at Disk <anchor xml:id="dbdoclet.50438274_marker-1295982" xreflabel=""/>Content - In Lustre, the inodes on the metadata server contain extended attributes (EAs) that store information about file striping. EAs contain a list of all object IDs and their locations (that is, the OST that stores them). The lfs tool can be used to obtain this information for a given file using the getstripe subcommand. Use a corresponding lfs setstripe command to specify striping attributes for a new file or directory. - The lfsgetstripe utility is written in C; it takes a Lustre filename as input and lists all the objects that form a part of this file. To obtain this information for the file /mnt/lustre/frog in Lustre file system, run: - $ lfs getstripe /mnt/lustre/frog -$ - obdix objid - 0 17 - 1 4 - - The debugfs tool is provided in the e2fsprogs package. It can be used for interactive debugging of an ldiskfs file system. The debugfs tool can either be used to check status or modify information in the file system. In Lustre, all objects that belong to a file are stored in an underlying ldiskfs file system on the OSTs. The file system uses the object IDs as the file names. Once the object IDs are known, use the debugfs tool to obtain the attributes of all objects from different OSTs. - A sample run for the /mnt/lustre/frog file used in the above example is shown here: - $ debugfs -c /tmp/ost1 - debugfs: cd O - debugfs: cd 0 /* for files in group 0 \ + 28.2.6 <anchor xml:id="dbdoclet.50438274_54455" xreflabel=""/>Looking at Disk <anchor xml:id="dbdoclet.50438274_marker-1295982" xreflabel=""/>Content + In Lustre, the inodes on the metadata server contain extended attributes (EAs) that store information about file striping. EAs contain a list of all object IDs and their locations (that is, the OST that stores them). 
The lfs tool can be used to obtain this information for a given file using the getstripe subcommand. Use a corresponding lfs setstripe command to specify striping attributes for a new file or directory. + The lfs getstripe utility is written in C; it takes a Lustre filename as input and lists all the objects that form a part of this file. To obtain this information for the file /mnt/lustre/frog in a Lustre file system, run: + $ lfs getstripe /mnt/lustre/frog
+$
+ obdix objid
+ 0 17
+ 1 4
+
+ The debugfs tool is provided in the e2fsprogs package. It can be used for interactive debugging of an ldiskfs file system. The debugfs tool can be used either to check status or to modify information in the file system. In Lustre, all objects that belong to a file are stored in an underlying ldiskfs file system on the OSTs. The file system uses the object IDs as the file names. Once the object IDs are known, use the debugfs tool to obtain the attributes of all objects from different OSTs. + A sample run for the /mnt/lustre/frog file used in the above example is shown here: + $ debugfs -c /tmp/ost1
+ debugfs: cd O
+ debugfs: cd 0 /* for files in group 0 */
+ debugfs: cd d<objid % 32>
+ debugfs: stat <objid> /* for getattr on object */
+ debugfs: quit
+## Suppose object id is 36, then follow the steps below:
+ $ debugfs /tmp/ost1
+ debugfs: cd O
+ debugfs: cd 0
+ debugfs: cd d4 /* objid % 32 */
+ debugfs: stat 36 /* for getattr on obj 36 */
+ debugfs: dump 36 /tmp/obj.36 /* dump contents of obj 36 */
+ debugfs: quit
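The subdirectory arithmetic used in the debugfs session above can be sketched in shell: objects are hashed into the directories d0 through d31 by the object ID modulo 32.

```shell
# Compute the debugfs subdirectory that holds a given object.
# For object ID 36, 36 % 32 = 4, so the object lives under d4.
objid=36
subdir="d$((objid % 32))"
echo "$subdir"
```

This matches the cd d4 step in the sample run for object 36.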
- <anchor xml:id="dbdoclet.50438274_pgfId-1296008" xreflabel=""/>28.2.7 Finding the Lustre <anchor xml:id="dbdoclet.50438274_marker-1296007" xreflabel=""/>UUID of an OST - To determine the Lustre UUID of an obdfilter disk (for example, if you mix up the cables on your OST devices or the SCSI bus numbering suddenly changes and the SCSI devices get new names), use debugfs to get the last_rcvd file. + 28.2.7 Finding the Lustre <anchor xml:id="dbdoclet.50438274_marker-1296007" xreflabel=""/>UUID of an OST + To determine the Lustre UUID of an obdfilter disk (for example, if you mix up the cables on your OST devices or the SCSI bus numbering suddenly changes and the SCSI devices get new names), use debugfs to get the last_rcvd file.
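A minimal sketch of that procedure, assuming the OST sits on /dev/sdb (a placeholder device name; substitute the actual OST block device), extracts last_rcvd with debugfs and prints the UUID recorded at the head of the file:

```shell
# Read-only (-c "catastrophic" mode) debugfs session: copy the
# last_rcvd file out of the OST, then print the UUID in its header.
# /dev/sdb is a placeholder; use the real OST block device.
debugfs -c -R "dump last_rcvd /tmp/last_rcvd" /dev/sdb
strings /tmp/last_rcvd | head -1
```

The -R option runs a single debugfs command non-interactively, which is convenient when checking several candidate devices in a row.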
- <anchor xml:id="dbdoclet.50438274_pgfId-1296010" xreflabel=""/>28.2.8 Printing Debug Messages to the Console - To dump debug messages to the console (/var/log/messages), set the corresponding debug mask in the printk flag: - sysctl -w lnet.printk=-1 + 28.2.8 Printing Debug Messages to the Console + To dump debug messages to the console (/var/log/messages), set the corresponding debug mask in the printk flag: + sysctl -w lnet.printk=-1
 - This slows down the system dramatically. It is also possible to selectively enable or disable this capability for particular flags using: - sysctl -w lnet.printk=+vfstrace
-sysctl -w lnet.printk=-vfstrace + This slows down the system dramatically. It is also possible to selectively enable or disable this capability for particular flags using: + sysctl -w lnet.printk=+vfstrace
+sysctl -w lnet.printk=-vfstrace
 - It is possible to disable warning, error , and console messages, though it is strongly recommended to have something like lctldebug_daemon runing to capture this data to a local file system for debugging purposes. + It is possible to disable warning, error, and console messages, though it is strongly recommended to have something like lctl debug_daemon running to capture this data to a local file system for debugging purposes.
- <anchor xml:id="dbdoclet.50438274_pgfId-1296018" xreflabel=""/>28.2.9 Tracing <anchor xml:id="dbdoclet.50438274_marker-1296017" xreflabel=""/>Lock Traffic - Lustre has a specific debug type category for tracing lock traffic. Use: - lctl> filter all_types -lctl> show dlmtrace -lctl> debug_kernel [filename] + 28.2.9 Tracing <anchor xml:id="dbdoclet.50438274_marker-1296017" xreflabel=""/>Lock Traffic + Lustre has a specific debug type category for tracing lock traffic. Use: + lctl> filter all_types +lctl> show dlmtrace +lctl> debug_kernel [filename]
28.3 Lustre Debugging for Developers - The procedures in this section may be useful to developers debugging Lustre code. + The procedures in this section may be useful to developers debugging Lustre code.
- <anchor xml:id="dbdoclet.50438274_pgfId-1296027" xreflabel=""/>28.3.1 Adding Debugging to the <anchor xml:id="dbdoclet.50438274_marker-1296026" xreflabel=""/>Lustre Source Code - The debugging infrastructure provides a number of macros that can be used in Lustre source code to aid in debugging or reporting serious errors. - To use these macros, you will need to set the DEBUG_SUBSYSTEM variable at the top of the file as shown below: - #define DEBUG_SUBSYSTEM S_PORTALS + 28.3.1 Adding Debugging to the <anchor xml:id="dbdoclet.50438274_marker-1296026" xreflabel=""/>Lustre Source Code + The debugging infrastructure provides a number of macros that can be used in Lustre source code to aid in debugging or reporting serious errors. + To use these macros, you will need to set the DEBUG_SUBSYSTEM variable at the top of the file as shown below: + #define DEBUG_SUBSYSTEM S_PORTALS
 - A list of available macros with descritions is provided in the table below. + A list of available macros with descriptions is provided in the table below.
 - Macro - Description + Macro + Description
 - LBUG - A panic-style assertion in the kernel which causes Lustre to dump its circular log to the /tmp/lustre-log file. This file can be retrieved after a reboot. LBUG freezes the thread to allow capture of the panic stack. A system reboot is needed to clear the thread. + LBUG + A panic-style assertion in the kernel which causes Lustre to dump its circular log to the /tmp/lustre-log file. This file can be retrieved after a reboot. LBUG freezes the thread to allow capture of the panic stack. A system reboot is needed to clear the thread.
 - LASSERT - Validates a given expression as true, otherwise calls LBUG. The failed expression is printed on the console, although the values that make up the expression are not printed. + LASSERT + Validates a given expression as true, otherwise calls LBUG.
The failed expression is printed on the console, although the values that make up the expression are not printed. - LASSERTF - Similar to LASSERT but allows a free-format message to be printed, like printf/printk. + LASSERTF + Similar to LASSERT but allows a free-format message to be printed, like printf/printk.
 - CDEBUG - The basic, most commonly used debug macro that takes just one more argument than standard printf - the debug type. This message adds to the debug log with the debug mask set accordingly. Later, when a user retrieves the log for troubleshooting, they can filter based on this type.CDEBUG(D_INFO, "This is my debug message: the number is %d\n", number). + CDEBUG + The basic, most commonly used debug macro that takes just one more argument than standard printf - the debug type. This message adds to the debug log with the debug mask set accordingly. Later, when a user retrieves the log for troubleshooting, they can filter based on this type. For example: CDEBUG(D_INFO, "This is my debug message: the number is %d\n", number);
 - CERROR - Behaves similarly to CDEBUG, but unconditionally prints the message in the debug log and to the console. This is appropriate for serious errors or fatal conditions:CERROR("Something very bad has happened, and the return code is %d.\n", rc); + CERROR + Behaves similarly to CDEBUG, but unconditionally prints the message in the debug log and to the console. This is appropriate for serious errors or fatal conditions: CERROR("Something very bad has happened, and the return code is %d.\n", rc);
 - ENTRY and EXIT - Add messages to aid in call tracing (takes no arguments). When using these macros, cover all exit conditions to avoid confusion when the debug log reports that a function was entered, but never exited. + ENTRY and EXIT + Add messages to aid in call tracing (takes no arguments). When using these macros, cover all exit conditions to avoid confusion when the debug log reports that a function was entered, but never exited.
- LDLM_DEBUG and LDLM_DEBUG_NOLOCK - Used when tracing MDS and VFS operations for locking. These macros build a thin trace that shows the protocol exchanges between nodes. + LDLM_DEBUG and LDLM_DEBUG_NOLOCK + Used when tracing MDS and VFS operations for locking. These macros build a thin trace that shows the protocol exchanges between nodes. - DEBUG_REQ - Prints information about the given ptlrpc_request structure. + DEBUG_REQ + Prints information about the given ptlrpc_request structure. - OBD_FAIL_CHECK - Allows insertion of failure points into the Lustre code. This is useful to generate regression tests that can hit a very specific sequence of events. This works in conjunction with "sysctl -w lustre.fail_loc={fail_loc}" to set a specific failure point for which a given OBD_FAIL_CHECK will test. + OBD_FAIL_CHECK + Allows insertion of failure points into the Lustre code. This is useful to generate regression tests that can hit a very specific sequence of events. This works in conjunction with "sysctl -w lustre.fail_loc={fail_loc}" to set a specific failure point for which a given OBD_FAIL_CHECK will test. - OBD_FAIL_TIMEOUT - Similar to OBD_FAIL_CHECK. Useful to simulate hung, blocked or busy processes or network devices. If the given fail_loc is hit, OBD_FAIL_TIMEOUT waits for the specified number of seconds. + OBD_FAIL_TIMEOUT + Similar to OBD_FAIL_CHECK. Useful to simulate hung, blocked or busy processes or network devices. If the given fail_loc is hit, OBD_FAIL_TIMEOUT waits for the specified number of seconds. - OBD_RACE - Similar to OBD_FAIL_CHECK. Useful to have multiple processes execute the same code concurrently to provoke locking races. The first process to hit OBD_RACE sleeps until a second process hits OBD_RACE, then both processes continue. + OBD_RACE + Similar to OBD_FAIL_CHECK. Useful to have multiple processes execute the same code concurrently to provoke locking races. 
The first process to hit OBD_RACE sleeps until a second process hits OBD_RACE, then both processes continue. - OBD_FAIL_ONCE - A flag set on a lustre.fail_loc breakpoint to cause the OBD_FAIL_CHECK condition to be hit only one time. Otherwise, a fail_loc is permanent until it is cleared with "sysctl -w lustre.fail_loc=0". + OBD_FAIL_ONCE + A flag set on a lustre.fail_loc breakpoint to cause the OBD_FAIL_CHECK condition to be hit only one time. Otherwise, a fail_loc is permanent until it is cleared with "sysctl -w lustre.fail_loc=0". - OBD_FAIL_RAND - Has OBD_FAIL_CHECK fail randomly; on average every (1 / lustre.fail_val) times. + OBD_FAIL_RAND + Has OBD_FAIL_CHECK fail randomly; on average every (1 / lustre.fail_val) times. - OBD_FAIL_SKIP - Has OBD_FAIL_CHECK succeed lustre.fail_val times, and then fail permanently or once with OBD_FAIL_ONCE. + OBD_FAIL_SKIP + Has OBD_FAIL_CHECK succeed lustre.fail_val times, and then fail permanently or once with OBD_FAIL_ONCE. - OBD_FAIL_SOME - Has OBD_FAIL_CHECK fail lustre.fail_val times, and then succeed. + OBD_FAIL_SOME + Has OBD_FAIL_CHECK fail lustre.fail_val times, and then succeed.
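As a sketch of how these failure points are driven from user space (the site value 0x123 below is purely hypothetical; real fail_loc values are defined alongside the OBD_FAIL_CHECK calls in the Lustre source, and 0x80000000 is assumed here to be the OBD_FAIL_ONCE flag bit):

```shell
# Arm a (hypothetical) failure point so that it fires exactly once,
# then clear it after the test workload has run.
sysctl -w lustre.fail_loc=0x80000123   # site 0x123 ORed with OBD_FAIL_ONCE
# ... run the workload expected to hit the OBD_FAIL_CHECK site ...
sysctl -w lustre.fail_loc=0            # fail_loc persists until cleared
```

Clearing fail_loc afterwards matters because, as noted above, a breakpoint without OBD_FAIL_ONCE remains armed until explicitly reset to 0.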
- <anchor xml:id="dbdoclet.50438274_pgfId-1296100" xreflabel=""/>28.3.2 Accessing a Ptlrpc <anchor xml:id="dbdoclet.50438274_marker-1296099" xreflabel=""/>Request History - Each service maintains a request history, which can be useful for first occurrence troubleshooting. - Ptlrpc is an RPC protocol layered on LNET that deals with stateful servers and has semantics and built-in support for recovery. - A prlrpc request history works as follows: + 28.3.2 Accessing a Ptlrpc <anchor xml:id="dbdoclet.50438274_marker-1296099" xreflabel=""/>Request History + Each service maintains a request history, which can be useful for first-occurrence troubleshooting. + Ptlrpc is an RPC protocol layered on LNET that deals with stateful servers and has semantics and built-in support for recovery. + A ptlrpc request history works as follows:
 - Request_in_callback() adds the new request to the service's request history. + Request_in_callback() adds the new request to the service's request history.
 - When a request buffer becomes idle, it is added to the service's request buffer history list. + When a request buffer becomes idle, it is added to the service's request buffer history list.
 - Buffers are culled from the service's request buffer history if it has grown above
- req_buffer_history_max and its reqs are removed from the service's request history. + Buffers are culled from the service's request buffer history if it has grown above
+ req_buffer_history_max and its reqs are removed from the service's request history.
- Request history is accessed and controlled using the following /proc files under the service directory: + Request history is accessed and controlled using the following /proc files under the service directory:
 - req_buffer_history_len + req_buffer_history_len
 - Number of request buffers currently in the history + Number of request buffers currently in the history
 - req_buffer_history_max + req_buffer_history_max
 - Maximum number of request buffers to keep + Maximum number of request buffers to keep
 - req_history + req_history
 - The request history - Requests in the history include "live" requests that are currently being handled. Each line in req_history looks like: - <seq>:<target NID>:<client ID>:<xid>:<length>:<phase> <svc specific> + The request history + Requests in the history include "live" requests that are currently being handled. Each line in req_history looks like: + <seq>:<target NID>:<client ID>:<xid>:<length>:<phase> <svc specific>
@@ -647,74 +647,74 @@
 - Parameter - Description + Parameter + Description
 - seq - Request sequence number + seq + Request sequence number
 - target NID - Destination NID of the incoming request + target NID + Destination NID of the incoming request
 - client ID - Client PID and NID + client ID + Client PID and NID
 - xid - rq_xid + xid + rq_xid
 - length - Size of the request message + length + Size of the request message
 - phase + phase
 - New (waiting to be handled or could not be unpacked) + New (waiting to be handled or could not be unpacked)
 - Interpret (unpacked or being handled) + Interpret (unpacked or being handled)
 - Complete (handled) + Complete (handled)
 - svc specific - Service-specific request printout. Currently, the only service that does this is the OST (which prints the opcode if the message has been unpacked successfully + svc specific + Service-specific request printout. Currently, the only service that does this is the OST (which prints the opcode if the message has been unpacked successfully)
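In practice these files are read directly with cat; a sketch, assuming a hypothetical OST service directory (the actual path depends on the service being inspected and the Lustre version):

```shell
# Hypothetical service directory; substitute the service of interest.
SVC=/proc/fs/lustre/ost/OSS/ost
cat $SVC/req_buffer_history_len   # request buffers currently in the history
cat $SVC/req_buffer_history_max   # maximum number of buffers to keep
cat $SVC/req_history              # one line per request, in the
                                  # <seq>:<target NID>:<client ID>:... format above
```

Writing a larger value into req_buffer_history_max retains more buffers, at the cost of the memory they pin.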
- <anchor xml:id="dbdoclet.50438274_pgfId-1296154" xreflabel=""/>28.3.3 Finding Memory <anchor xml:id="dbdoclet.50438274_marker-1296153" xreflabel=""/>Leaks Using leak_finder.pl - Memory leaks can occur in code when memory has been allocated and then not freed once it is no longer required. The leak_finder.pl program provides a way to find memory leaks. - Before running this program, you must turn on debugging to collect all malloc and free entries. Run: - sysctl -w lnet.debug=+malloc + 28.3.3 Finding Memory <anchor xml:id="dbdoclet.50438274_marker-1296153" xreflabel=""/>Leaks Using leak_finder.pl + Memory leaks can occur in code when memory has been allocated and then not freed once it is no longer required. The leak_finder.pl program provides a way to find memory leaks. + Before running this program, you must turn on debugging to collect all malloc and free entries. Run: + sysctl -w lnet.debug=+malloc - Then complete the following steps: + Then complete the following steps: - 1. Dump the log into a user-specified log file using lctl (see Using the lctl Tool to View Debug Messages). + 1. Dump the log into a user-specified log file using lctl (see Using the lctl Tool to View Debug Messages). - 2. Run the leak finder on the newly-created log dump: - perl leak_finder.pl <ascii-logname> + 2. 
Run the leak finder on the newly-created log dump: + perl leak_finder.pl <ascii-logname> - The output is: - malloced 8bytes at a3116744 (called pathcopy) -(lprocfs_status.c:lprocfs_add_vars:80) -freed 8bytes at a3116744 (called pathcopy) -(lprocfs_status.c:lprocfs_add_vars:80) - - The tool displays the following output to show the leaks found: - Leak:32bytes allocated at a23a8fc(service.c:ptlrpc_init_svc:144,debug file \ + The output is: + malloced 8bytes at a3116744 (called pathcopy) +(lprocfs_status.c:lprocfs_add_vars:80) +freed 8bytes at a3116744 (called pathcopy) +(lprocfs_status.c:lprocfs_add_vars:80) + + The tool displays the following output to show the leaks found: + Leak:32bytes allocated at a23a8fc(service.c:ptlrpc_init_svc:144,debug file \ line 241)