LU-14535 quota: get all quota info in LFS

This patch adds the option "-a" to lfs, to get the quota info of all
quota IDs. It iterates over the quota settings saved in the global
quota setting files "quota_master/md-0x0" and "quota_master/dt-0x0"
on the QMT, and over the quota usage info saved in the accounting
quota files of the backend FS (LDiskFS or ZFS) on the QSDs, then
merges the two kinds of quota info on the client and prints them in
a similar way to "lfs quota -u|-g|-p".

$ lfs quota -a -u /mnt/lustre
Filesystem /mnt/lustre, Disk usr quotas
  quota_id   kbytes  quota   limit grace  files quota  limit grace
      root     9684      0       0     -   1019     0      0     -
       bin        4      0  102400     -      1     0  10240     -
    daemon        4      0  102400     -      1     0  10240     -
       adm        4      0  102400     -      1     0  10240     -
        lp        4      0  102400     -      1     0  10240     -
      sync        4      0  102400     -      1     0  10240     -
  shutdown        4      0  102400     -      1     0  10240     -
      halt        4      0  102400     -      1     0  10240     -
      mail        4      0  102400     -      1     0  10240     -

$ lfs quota -a -g /mnt/lustre
Filesystem /mnt/lustre, Disk grp quotas
  quota_id   kbytes  quota   limit grace  files quota  limit grace
      root     9684      0       0     -   1019     0      0     -
       bin        4      0  204800     -      1     0  20480     -
    daemon        4      0  204800     -      1     0  20480     -
       adm        4      0  204800     -      1     0  20480     -
        lp        4      0  204800     -      1     0  20480     -
      sync        4      0  204800     -      1     0  20480     -
  shutdown        4      0  204800     -      1     0  20480     -
      halt        4      0  204800     -      1     0  20480     -
      mail        4      0  204800     -      1     0  20480     -

This patch also fixes a deadlock in qmt_pool_recalc: the rw_semaphore
"qmt_pool_info.qpi_sarr.osts.op_rw_sem" is acquired in
qmt_pool_recalc (read mode) and then acquired once more in
qmt_seed_glbe_all (read mode), which gets stuck if a write-mode lock
acquisition from another thread is pending between the two read
acquisitions:

qsd_reint_qpool D
Call Trace:
 schedule+0x29/0x70
 rwsem_down_read_failed+0x105/0x1c0
 call_rwsem_down_read_failed+0x18/0x30
 down_read+0x20/0x40
 qmt_seed_glbe_all+0x3a0/0x800 [lquota]
 qmt_site_recalc_cb+0x3c7/0x800 [lquota]
 cfs_hash_for_each_tight+0x11e/0x330
 cfs_hash_for_each+0x10/0x20 [libcfs]
 qmt_pool_recalc+0x9fc/0x1310 [lquota]

llog_process_th D
Call Trace:
 schedule+0x29/0x70
 rwsem_down_write_failed+0x215/0x3c0
 call_rwsem_down_write_failed+0x17/0x30
 down_write+0x2d/0x3d
 lu_tgt_pool_remove+0x36/0x1e0 [obdclass]
 qmt_pool_add_rem+0x655/0x920 [lquota]
 qmt_pool_rem+0x10/0x20 [lquota]
 lod_pool_remove_q+0xd6/0x1d0 [lod]
 class_process_config+0x16f2/0x2b20
 class_config_llog_handler+0x839/0x1540
 llog_process_thread+0x913/0x1c10
 llog_process_thread_daemonize+0x9f/0xe0

Test-Parameters: testlist=sanity-quota env=SLOW=yes,ONLY=49,NUM_QIDS=20000
Change-Id: I08feb928fbf34635ec9c5c341de993c718798dc9
Signed-off-by: Hongchao Zhang <hongchao@whamcloud.com>
Reviewed-on: https://review.whamcloud.com/c/fs/lustre-release/+/42098
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: James Simmons <jsimmons@infradead.org>
Reviewed-by: Sergey Cheremencev <scherementsev@ddn.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>

LU-14139 statahead: batched statahead processing

Batched metadata processing can give a big performance boost. This
patch implements a batched statahead mechanism, which also increases
the performance of directory traversal or listing, such as the
command 'ls'.

With batched statahead, one batched getattr() RPC does the work of N
normal lookup/getattr RPCs. It packs a number of dentry names
obtained from the readdir() call, together with lock handles
prepared in the client-side lock namespace, into one large batched
RPC, transferred via bulk I/O, to obtain ibits DLM locks and
associated attributes for many files in one go. When the MDS
receives a batched getattr() RPC, it executes the sub requests in it
one by one, serially.

A tunable parameter named "statahead_batch_max" is defined; it is
the maximal number of items that can be batched and processed within
one aggregate RPC. Once the number of sub requests exceeds this
predefined limit, the batched RPC is packed and triggered. The
batched RPC is also triggered explicitly when the readdir() call
reaches the end position of the directory or the statahead thread
exits abnormally.

The mdtest performance results without/with this patch series are as
follows:

  without: mdtest-easy-stat  720.562369 kIOPS : time 118.695 seconds
  with:    mdtest-easy-stat 1218.290192 kIOPS : time  70.656 seconds

In this patch, statahead_batch_max=0 is set and batched statahead is
disabled by default. It will be enabled once the subsequent fixes for
batched RPCs have been merged.

Signed-off-by: Qian Yingjin <qian@ddn.com>
Change-Id: I5a80c2c377093dc8b8e21341f440e3038f017ca8
Reviewed-on: https://review.whamcloud.com/c/fs/lustre-release/+/40720
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Alex Zhuravlev <bzzz@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
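
For reference, the workload this accelerates is a plain
readdir-plus-stat traversal, the 'ls -l' pattern; a minimal POSIX
sketch (the directory path is an example):

  #include <dirent.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>

  /* Each fstatat() normally costs one lookup/getattr RPC; with
   * batched statahead the client prefetches these attributes in
   * large batched RPCs while the loop runs. */
  int main(int argc, char **argv)
  {
          DIR *dir = opendir(argc > 1 ? argv[1] : ".");
          struct dirent *d;

          if (!dir)
                  return 1;
          while ((d = readdir(dir)) != NULL) {
                  struct stat st;

                  if (fstatat(dirfd(dir), d->d_name, &st,
                              AT_SYMLINK_NOFOLLOW) == 0)
                          printf("%s: %lld bytes\n", d->d_name,
                                 (long long)st.st_size);
          }
          closedir(dir);
          return 0;
  }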

LU-14393 protocol: basic batching processing framework

Batch processing can deliver a performance boost. The larger the
batch size, the higher the latency for the entire batch. Although
the latency of the entire batch of operations is higher than the
latency of any single operation, the throughput of the batch of
operations is much higher.

This patch implements the basic batch processing framework for
Lustre. It can be used for the future batched statahead and WBC.

A batched RPC does not require that the opcodes of the sub requests
in a batch be the same; each sub request has its own opcode. This
allows batching not only read-only requests but also multiple
modification updates with different opcodes, and even a mixed
workload containing both read-only requests and modification
updates.

For recovery, only the batched RPC has a client XID; there is no
separate client XID for each sub request. Although the server
generates a transno for each update sub request, the transno is only
stored into the batched RPC (in @ptlrpc_body) when that sub update
request finishes. Thus the batched RPC only stores the transno of
the last sub update request. Only the batched RPC contains the
@ptlrpc_body message field; the sub requests in a batched RPC do
not.

A new field named @lrd_batch_idx is added to the client reply data
@lsd_reply_data. It indicates the sub request index within a batched
RPC. When the server finishes a sub update request, it updates
@lrd_batch_idx accordingly. When the server finds that a batched RPC
is a resend, then: if the index of a sub request is less than or
equal to @lrd_batch_idx in the reply data, the sub request has
already been executed and committed, and the server reconstructs the
reply for it; if the index is larger than @lrd_batch_idx, the server
re-executes the sub request in the batched RPC.

To simplify the reply/resend handling of batched RPCs, batch
processing stops at the first failure in the current design.

Signed-off-by: Qian Yingjin <qian@ddn.com>
Change-Id: Idaa814e82c968811bdda1c750b18c878b2c2ca67
Reviewed-on: https://review.whamcloud.com/c/fs/lustre-release/+/41378
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Mikhail Pershin <mpershin@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Alex Zhuravlev <bzzz@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
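
As a hedged illustration of the resend rule above (only
@lrd_batch_idx comes from the patch; the surrounding types and names
are hypothetical, not the actual Lustre code):

  /* Illustrative sketch of the per-sub-request resend decision. */
  struct reply_data { unsigned int lrd_batch_idx; };

  enum batch_action { BATCH_RECONSTRUCT_REPLY, BATCH_RE_EXECUTE };

  static enum batch_action
  batch_resend_action(const struct reply_data *lrd, unsigned int sub_idx)
  {
          /* index <= last finished index: the sub request already
           * executed and committed, so rebuild its saved reply
           * instead of redoing the update */
          if (sub_idx <= lrd->lrd_batch_idx)
                  return BATCH_RECONSTRUCT_REPLY;
          /* otherwise it never committed: execute it again */
          return BATCH_RE_EXECUTE;
  }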

LU-14138 ptlrpc: move more members in PTLRPC request into pill

Some data members of struct ptlrpc_request can be moved into the
pill (struct req_capsule):

  /** Request message - what client sent */
  struct lustre_msg *rq_reqmsg;
  /** Reply message - server response */
  struct lustre_msg *rq_repmsg;
  /** Fields that help to see if request and reply were swabbed */
  __u32 rq_req_swab_mask;
  __u32 rq_rep_swab_mask;

After this restructuring, @req_capsule can be used more generally,
and it makes packing and unpacking the sub requests of a batched
PtlRPC request easier for the coming batched metadata processing.

Signed-off-by: Qian Yingjin <qian@ddn.com>
Change-Id: Ib6d942b79ebf1a444d63b55ad4bc94813cf947c7
Reviewed-on: https://review.whamcloud.com/40669
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Alexey Lyashkov <alexey.lyashkov@hpe.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: James Simmons <jsimmons@infradead.org>
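
A hedged sketch of where those members land; the rc_* names below
only follow the capsule's usual prefix convention and are assumed,
not copied from the landed patch:

  #include <linux/types.h>

  struct lustre_msg;

  /* Assumed layout: the message pointers and swab masks move from
   * struct ptlrpc_request into the pill. */
  struct req_capsule {
          struct lustre_msg *rc_reqmsg;        /* what client sent */
          struct lustre_msg *rc_repmsg;        /* server response */
          __u32              rc_req_swab_mask; /* request swab state */
          __u32              rc_rep_swab_mask; /* reply swab state */
          /* ... the capsule's existing fields (format, area, ...) */
  };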

LU-14487 lustre: remove references to Sun Trademark.

"lustre" is no longer a trademark of Sun Microsystems. There is no
need to acknowledge the trademark in every file, so just remove all
these claims.

Test-Parameters: trivial
Signed-off-by: Mr NeilBrown <neilb@suse.de>
Change-Id: I214670b39c5718f2b691193f268a64856e0cd743
Reviewed-on: https://review.whamcloud.com/41880
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: James Simmons <jsimmons@infradead.org>

LU-14291 ptlrpc: format UPDATE messages in server-only code

There are some ptlrpc messages that are only used for targets to
communicate with each other: Object Updates between Targets (OUT).
These are never needed by the client, so the code for handling them
can be conditionally compiled with HAVE_SERVER_SUPPORT.

The code in layout.c needs struct declarations that are in that
file, so group them at the end of the file and add #ifdef. The code
in pack_generic.c can stand alone, so move it to a new pack_server.c
and compile that only when server code is requested.

For simplicity, also make req_check_sepol() completely server-side
and provide an inline stub for client-only code.

Test-Parameters: trivial
Signed-off-by: Mr NeilBrown <neilb@suse.de>
Change-Id: I788352575a2109df389760fff45207ad6de3391b
Reviewed-on: https://review.whamcloud.com/41125
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: James Simmons <jsimmons@infradead.org>
Reviewed-by: Sebastien Buisson <sbuisson@ddn.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
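
A hedged sketch of the client-only stub pattern described above; the
exact signature is assumed from context, not copied from the tree:

  struct req_capsule;

  #ifdef HAVE_SERVER_SUPPORT
  int req_check_sepol(struct req_capsule *pill);
  #else /* !HAVE_SERVER_SUPPORT */
  static inline int req_check_sepol(struct req_capsule *pill)
  {
          return 0;       /* sepol checks only matter on the server */
  }
  #endif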

LU-10810 ptlrpc: introduce OST_SEEK RPC

For the purposes of SEEK_HOLE/SEEK_DATA support, introduce a new
OST_SEEK RPC. The patch adds the RPC layout, a unified handler, and
a connect flag for compatibility needs.

Signed-off-by: Mikhail Pershin <mpershin@whamcloud.com>
Change-Id: I1580902b6b773d9a6d6f9beaa1ee1da60fbc20f8
Reviewed-on: https://review.whamcloud.com/39707
Reviewed-by: Sebastien Buisson <sbuisson@ddn.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
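
For illustration, the userspace interface this RPC serves is
lseek(2) with SEEK_DATA/SEEK_HOLE; a minimal probe (error handling
kept minimal; lseek() returns -1 with ENXIO when no data or hole
follows the offset):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Find the first data byte and the hole after it in a sparse file. */
  int main(int argc, char **argv)
  {
          int fd = open(argv[1], O_RDONLY);
          off_t data, hole;

          if (fd < 0)
                  return 1;
          data = lseek(fd, 0, SEEK_DATA);    /* first data at/after 0 */
          hole = lseek(fd, data, SEEK_HOLE); /* first hole after it */
          printf("data at %lld, next hole at %lld\n",
                 (long long)data, (long long)hole);
          close(fd);
          return 0;
  }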

LU-12275 sec: atomicity of encryption context getting/setting

Encryption layer needs to set an encryption context on files and
dirs that are encrypted. This context is stored as an extended
attribute, that then needs to be fetched upon metadata ops like
lookup, getattr, open, truncate, and layout.

With this patch we send the encryption context to the MDT along with
create RPCs. This closes the insecure window between creation and
setting of the encryption context, and saves a setxattr request.

This patch also introduces a way to have the MDT return the
encryption context upon granted lock reply, making the encryption
context retrieval atomic, and sparing the client an additional
getxattr request.

Test-Parameters: testlist=sanity-sec envdefinitions=ONLY="36 37 38 39 40 41 42 43 44 45 46 47 48 49" clientdistro=el8.1 fstype=ldiskfs mdscount=2 mdtcount=4
Test-Parameters: testlist=sanity-sec envdefinitions=ONLY="36 37 38 39 40 41 42 43 44 45 46 47 48 49" clientdistro=el8.1 fstype=zfs mdscount=2 mdtcount=4
Test-Parameters: clientversion=2.12 env=SANITY_EXCEPT="27M 56ra 151 156 802"
Test-Parameters: serverversion=2.12 env=SANITY_EXCEPT="56oc 56od 165a 165b 165d 205b"
Test-Parameters: serverversion=2.12 clientdistro=el8.1 env=SANITYN_EXCEPT=106,SANITY_EXCEPT="56oc 56od 165a 165b 165d 205b"
Signed-off-by: Sebastien Buisson <sbuisson@ddn.com>
Change-Id: I45599cdff13d5587103aff6edd699abcda6cb8f4
Reviewed-on: https://review.whamcloud.com/38430
Tested-by: jenkins <devops@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Mike Pershin <mpershin@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>

LU-3606 fallocate: Implement fallocate preallocate operation

This patch adds fallocate(2) preallocate operation support for
Lustre. The fallocate() method of the file_operations is implemented
and transported to the OSTs to interface with the underlying OSD's
fallocate code. In a separate patch, a new RPC, OST_FALLOCATE, has
been added and reserved for space preallocation.

The fallocate functionality (prealloc) in CLIO has been multiplexed
with CIT_SETATTR. (https://review.whamcloud.com/37277)

Lustre fsx (file system exerciser) is updated in a separate patch to
handle fallocate calls. (https://review.whamcloud.com/37277)

Only the fallocate preallocate operation is supported by this patch
for now. Other operations, such as FALLOC_FL_PUNCH_HOLE
(deallocate), FALLOC_FL_ZERO_RANGE, FALLOC_FL_COLLAPSE_RANGE and
FALLOC_FL_INSERT_RANGE, are not supported by this patch and will be
addressed by a separate patch.

ZFS is not supported by this patch. ZFS fallocate will be addressed
by patch https://review.whamcloud.com/36506/

A new test case under sanity is added to verify the fallocate call.

Test-Parameters: fstype=ldiskfs testlist=sanity,sanityn,sanity-dom
Signed-off-by: Swapnil Pimpale <spimpale@ddn.com>
Signed-off-by: Li Xi <lixi@ddn.com>
Signed-off-by: Abrarahmed Momin <abrar.momin@gmail.com>
Signed-off-by: Arshad Hussain <arshad.super@gmail.com>
Change-Id: I03f27d356616fbf3a3ab8e6309af26c00434d81b
Reviewed-on: https://review.whamcloud.com/9275
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Wang Shilong <wshilong@ddn.com>
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
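
For illustration, the userspace call this enables (the mount point
path is an example; mode 0 is the plain preallocation case supported
here, which reserves blocks on the OSTs and extends i_size):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <unistd.h>

  /* Preallocate 1 GiB for a new file. */
  int main(void)
  {
          int fd = open("/mnt/lustre/prealloc_file",
                        O_CREAT | O_WRONLY, 0644);
          int rc;

          if (fd < 0)
                  return 1;
          rc = fallocate(fd, 0, 0, 1024 * 1024 * 1024);
          close(fd);
          return rc ? 1 : 0;
  }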

LU-11023 quota: quota pools for OSTs

This patch allows applying quota settings not only to the whole
system but also to different OST pools. With this patch, each "LOD"
pool is duplicated by the QMT, so quota pools (QPs) can be tuned
with the standard lctl pool_new/add/remove/erase commands. All QPs
are subsets of a global pool that includes all data devices in the
system, including DOM. However, DOM is not supported yet; adding DOM
support in the future should not take much work - it mainly needs a
decision about how MDTs can be combined into a pool.

The main idea of QPs is to find all pools for a requested ID
(usr/grp/prj) and apply the minimum limit. The patch doesn't affect
the QSD side, so slaves know nothing about pools and their different
limits; qunit and edquot are calculated for each slave on the
master.

To apply quota to a QP, the patch adds the option "-o" to lfs
setquota. To get quotas for a QP, it provides the long option
"--pool" in lfs quota. See usage examples in sanity-quota tests
1b/c/d.

For now, QPs work properly only on a clean system. Support for
recalculating granted space when OSTs are added to or removed from a
pool will be added in the next patch, together with accounting of
the space already granted to each ID in a pool.

Test-Parameters: testgroup=review-dne-part-4
Change-Id: I3396aded2156729b4fd15166eb59db59ee4c967e
Signed-off-by: Sergey Cheremencev <c17829@cray.com>
Reviewed-on: https://review.whamcloud.com/35615
Tested-by: jenkins <devops@whamcloud.com>
Reviewed-by: James Simmons <jsimmons@infradead.org>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Shaun Tancheff <shaun.tancheff@hpe.com>
Reviewed-by: Hongchao Zhang <hongchao@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>

LU-13064 sec: check permissions for changelogs access

Root permissions should be checked when reading or clearing
changelogs from clients. In particular, if root is squashed via a
nodemap entry, it should not be allowed to access changelogs.

To achieve this, send the mdt body along with
RQF_LLOG_ORIGIN_HANDLE_CREATE and RQF_MDT_SET_INFO requests. On the
server side, retrieve the user credentials and make sure they have
root permission.

Test-Parameters: clientversion=2.12 envdefinitions=SANITY_EXCEPT="27M 56ra 151 156 802"
Test-Parameters: serverversion=2.12
Signed-off-by: Sebastien Buisson <sbuisson@ddn.com>
Change-Id: I0c6cc99f8a7c5a13c2b31009d73f38976931ec37
Reviewed-on: https://review.whamcloud.com/36990
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Emoly Liu <emoly@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>

LU-12090 utils: lfs rmfid

A new RPC_REINT_RMFID has been introduced by this patch. It is
intended to be used with the corresponding llapi_rmfid() to unlink a
batch of MDS files by their FIDs. The caller has to have permission
to modify the parent dir(s) and the objects themselves.

Change-Id: I50421d85babc74d448842acea489321a5d40052d
Signed-off-by: Alex Zhuravlev <bzzz@whamcloud.com>
Reviewed-on: https://review.whamcloud.com/34449
Reviewed-by: Li Xi <lixi@ddn.com>
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Patrick Farrell <pfarrell@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
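
A hedged usage sketch of llapi_rmfid(); it assumes the
(path, struct fid_array *) calling convention and the fa_nr/fa_fids
member names from the user headers, so verify against
lustre/lustreapi.h before relying on it:

  #include <stdlib.h>
  #include <lustre/lustreapi.h>

  /* Unlink one file by FID through the batch interface. */
  int rm_one_fid(const char *mntpath, const struct lu_fid *fid)
  {
          struct fid_array *fa;
          int rc;

          fa = calloc(1, sizeof(*fa) + sizeof(fa->fa_fids[0]));
          if (!fa)
                  return -1;
          fa->fa_nr = 1;                  /* number of FIDs packed */
          fa->fa_fids[0] = *fid;
          rc = llapi_rmfid(mntpath, fa);  /* needs rights on parent
                                           * dir and object */
          free(fa);
          return rc;
  }

The FIDs to remove would typically come from 'lfs path2fid' or from
changelog records.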

LU-11213 ptlrpc: intent_getattr fetches default LMV

intent_getattr fetches the default LMV and caches it on the client,
where it is used in subdir creation:

* Add RMF_DEFAULT_MDT_MD in the intent_getattr reply.
* Save the default LMV in ll_inode_info->lli_default_lsm_md, and
  replace lli_def_stripe_offset with it.
* Take a LOOKUP lock on default LMV setting to let clients update
  their cached default LMV.
* Improve mdt_object_striped() to read from the bottom device to
  avoid reading stripe FIDs.

Signed-off-by: Lai Siyao <lai.siyao@whamcloud.com>
Change-Id: Idb369db2c514a9c5108390f70d9284b3a87d26db
Reviewed-on: https://review.whamcloud.com/34802
Tested-by: Jenkins
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Hongchao Zhang <hongchao@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>

LU-8955 ptlrpc: manage SELinux policy info for metadata ops

Add SELinux policy info for the following metadata operations:
- create
- open
- unlink
- rename
- getxattr
- setxattr
- setattr
- getattr
- symlink
- hardlink

On the server side, get the SELinux policy info from the nodemap and
compare it with the one received from the client.

Test-Parameters: serverbuildno=62488 serverjob=lustre-reviews testlist=sanity,sanity-selinux clientselinux
Test-Parameters: clientbuildno=4033 clientjob=lustre-reviews-patchless testlist=sanity,sanity-selinux clientselinux
Signed-off-by: Sebastien Buisson <sbuisson@ddn.com>
Change-Id: I16493d7c5713180fb065623b735d7348fc3f9140
Reviewed-on: https://review.whamcloud.com/24424
Reviewed-by: Patrick Farrell <pfarrell@whamcloud.com>
Reviewed-by: Li Dongyang <dongyangli@ddn.com>
Tested-by: Jenkins
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>

LU-8955 ptlrpc: manage SELinux policy info at connect time

At connect time, compute SELinux policy info on the client side, and
send it over the wire. On the server side, get the SELinux policy
info from the nodemap and compare it with the one received from the
client.

Signed-off-by: Sebastien Buisson <sbuisson@ddn.com>
Change-Id: I9b4a206455f2c0b451f6b3ed7e3a85285592758e
Reviewed-on: https://review.whamcloud.com/24422
Reviewed-by: Patrick Farrell <pfarrell@whamcloud.com>
Reviewed-by: Li Dongyang <dongyangli@ddn.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
Tested-by: Jenkins
Tested-by: Maloo <maloo@whamcloud.com>

LU-11375 mdc: use old statfs format

Use the old statfs format when the client talks to an old server
that has no support for aggregated statfs.

Test-Parameters: clientjob=lustre-b2_10 clientbuildno=136 testgroup=review-ldiskfs
Change-Id: I447b312d46db56da152f62835b3f98401f997cf0
Signed-off-by: Alex Zhuravlev <bzzz@whamcloud.com>
Reviewed-on: https://review.whamcloud.com/33162
Tested-by: Jenkins
Tested-by: Maloo <hpdd-maloo@intel.com>
Tested-by: James Nunez <jnunez@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Tested-by: James Simmons <uja.ornl@yahoo.com>

LU-11014 mdc: remove obsolete intent opcodes

In enum ldlm_intent_flags, remove the obsolete constants IT_UNLINK,
IT_TRUNC, IT_EXEC, IT_PIN, IT_SETXATTR. Remove any handling code for
these opcodes.

Signed-off-by: John L. Hammond <john.hammond@intel.com>
Change-Id: I66f20e4c881cb77a481805a148a33f1c2daa5f0c
Reviewed-on: https://review.whamcloud.com/32361
Reviewed-by: Fan Yong <fan.yong@intel.com>
Tested-by: Jenkins
Tested-by: Maloo <hpdd-maloo@intel.com>
Reviewed-by: Mike Pershin <mpershin@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>

LU-10181 mdt: read on open for DoM files

Read file data upon open and return it in the reply. This works only
for files with a Data-on-MDT layout and no OST components
initialized. Three cases may occur:

1) File data fits in the already-allocated reply buffer (~9K) and is
   returned in that buffer in the OPEN reply.
2) The file fits in the maximum reply buffer (128K); the reply is
   returned with a larger size to the client, causing a resend with
   a re-allocated buffer.
3) The file doesn't fit in the reply buffer, but its tail partially
   fills a page; then that tail is returned. This can be useful for
   an append case.

Test-Parameters: mdssizegb=20 testlist=sanity-dom,dom-performance,racer
Change-Id: I5574ce5f74017fc654715e212b71fc3b905bdcae
Signed-off-by: Mikhail Pershin <mike.pershin@intel.com>
Reviewed-on: https://review.whamcloud.com/23011
Tested-by: Jenkins
Tested-by: Maloo <hpdd-maloo@intel.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: Lai Siyao <lai.siyao@whamcloud.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>

LU-10855 llog: remove obsolete llog handlers

Remove the obsolete llog RPC handling for cancel, close, and
destroy. Remove llog handling from ldlm_callback_handler(). Remove
the unused client side method llog_client_destroy().

Signed-off-by: John L. Hammond <john.hammond@intel.com>
Change-Id: Ieab44f3796971a7d3c65d6044e4c0be4afb4b508
Reviewed-on: https://review.whamcloud.com/32202
Tested-by: Jenkins
Tested-by: Maloo <hpdd-maloo@intel.com>
Reviewed-by: Mike Pershin <mike.pershin@intel.com>
Reviewed-by: Sebastien Buisson <sbuisson@ddn.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>

LU-10308 misc: update Intel copyright messages for 2017

Update copyright messages for files updated in 2016, excluding
trivial patches. Add trivial patches to the updatecw.sh script
exclude list.

Revert some changes that were incorrectly attributed to the 2016
(d10200a80770f0029d1d665af954187b9ad883df) and 2015
(0754bc8f2623bea184111af216f7567608db35b6) copyright update patches
themselves, since they were not in the exclude list when the
subsequent script was run.

Test-Parameters: trivial
Signed-off-by: Andreas Dilger <andreas.dilger@intel.com>
Change-Id: I82f21c30c4dac75792bb49fc139bee2ca51f5545
Reviewed-on: https://review.whamcloud.com/30341
Tested-by: Jenkins
Tested-by: Maloo <hpdd-maloo@intel.com>
Reviewed-by: Jian Yu <jian.yu@intel.com>
Reviewed-by: James Nunez <james.a.nunez@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>