FROMLIST: Update Inline Encryption from v6 to upstream version of patch series
The block layer patches for inline encryption are now upstream, so
update Android to the upstream version of inline encryption. The
fscrypt/f2fs/ext4 patches are also updated to the latest version sent
upstream (since they can't be updated separately from the block layer
patches).
Changes v6 => v7:
- Keyslot management is now done on a per-request basis rather than a
per-bio basis.
- Storage drivers can now specify the maximum number of bytes they
can accept for the data unit number (DUN) for each crypto algorithm,
and upper layers can specify the minimum number of bytes of DUN they
want with the blk_crypto_key they send with the bio - a driver is
only considered to support a blk_crypto_key if the driver supports at
least as many DUN bytes as the upper layer wants. This is necessary
because storage drivers may not support as many bytes as the
algorithm specification dictates (e.g. UFS only supports 8-byte
DUNs for AES-256-XTS, even though the algorithm specification
says DUNs are 16 bytes long).
- Introduce SB_INLINECRYPT to keep track of whether inline encryption
is enabled for a filesystem (instead of using an fscrypt_operation).
- Expose keyslot manager declaration and embed it within ufs_hba to
clean up code.
- Make blk-crypto preclude blk-integrity.
- Some bug fixes
- Introduce UFSHCD_QUIRK_BROKEN_CRYPTO for UFS drivers that don't
support inline encryption (yet)
Changes v7 => v8:
- Pass a struct blk_ksm_keyslot * around instead of slot numbers which
simplifies some functions and passes around arguments with better types
- Make bios with no encryption context avoid making calls into blk-crypto
by checking for the presence of bi_crypt_context before making the call
- Make blk-integrity preclude inline encryption support at probe time
- Many many cleanups
Changes v8 => v9:
- Don't open code bio_has_crypt_ctx into callers of blk-crypto functions.
- Lots of cleanups
Changes v9 => v10:
- Incorporate Eric's fix for allowing en/decryption to happen as usual via
fscrypt in the case that hardware doesn't support the desired crypto
configuration, but blk-crypto-fallback is disabled. (Introduce
struct blk_crypto_config and blk_crypto_config_supported for fscrypt
to call, to check that either blk-crypto-fallback is enabled or the
device supports the crypto configuration).
- Update docs
- Lots of cleanups
Changes v10 => v11:
- We now allocate a new bio_crypt_ctx for each request instead of
pulling and reusing the one in the bio inserted into the request. The
bio_crypt_ctx of a bio is freed after the bio is ended.
- Make each blk_ksm_keyslot store a pointer to the blk_crypto_key
instead of a copy of the blk_crypto_key, so that each blk_crypto_key
will have its own keyslot. We also won't need to compute the siphash
for a blk_crypto_key anymore.
- Minor cleanups
Changes v11 => v12:
- Inlined some fscrypt functions
- Minor cleanups and improved comments
Changes v12 => v13:
- Updated docs
- Minor cleanups
- rebased onto linux-block/for-next
Changes v13 => fscrypt/f2fs/ext4 upstream patch series
- rename struct fscrypt_info::ci_key to ci_enc_key
- set dun bytes more precisely in fscrypt
- cleanups
Bug: 137270441
Test: Test cuttlefish boots both with and without inlinecrypt mount
option specified in fstab, while using both F2FS and EXT4 for
userdata.img. Also verified ciphertext via
"atest -v vts_kernel_encryption_test"
Also tested by running gce-xfstests on both the
auto and encrypt test groups on EXT4 and F2FS both with and
without the inlinecrypt mount option. The UFS changes were
tested on a Pixel 4 device.
Link: https://lore.kernel.org/linux-block/[email protected]/
Link: https://lore.kernel.org/linux-fscrypt/[email protected]/
Link: https://lore.kernel.org/linux-scsi/[email protected]/
Change-Id: I57c10d370bf006c9dfcf173f21a720413017761e
Signed-off-by: Satya Tangirala <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
diff --git a/Documentation/admin-guide/ext4.rst b/Documentation/admin-guide/ext4.rst
index 9443fce..ed997e3 100644
--- a/Documentation/admin-guide/ext4.rst
+++ b/Documentation/admin-guide/ext4.rst
@@ -395,6 +395,12 @@
Documentation/filesystems/dax.txt. Note that this option is
incompatible with data=journal.
+ inlinecrypt
+ Encrypt/decrypt the contents of encrypted files using the blk-crypto
+ framework rather than filesystem-layer encryption. This allows the use
+ of inline encryption hardware. The on-disk format is unaffected. For
+ more details, see Documentation/block/inline-encryption.rst.
+
Data Mode
=========
There are 3 different data modes:
diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
index 330106b..354817b 100644
--- a/Documentation/block/inline-encryption.rst
+++ b/Documentation/block/inline-encryption.rst
@@ -4,6 +4,22 @@
Inline Encryption
=================
+Background
+==========
+
+Inline encryption hardware sits logically between memory and the disk, and can
+en/decrypt data as it goes in/out of the disk. Inline encryption hardware has a
+fixed number of "keyslots" - slots into which encryption contexts (i.e. the
+encryption key, encryption algorithm, data unit size) can be programmed by the
+kernel at any time. Each request sent to the disk can be tagged with the index
+of a keyslot (and also a data unit number to act as an encryption tweak), and
+the inline encryption hardware will en/decrypt the data in the request with the
+encryption context programmed into that keyslot. This is very different from
+full disk encryption solutions like self encrypting drives/TCG OPAL/ATA
+Security standards, since with inline encryption, any block on disk could be
+encrypted with any encryption context the kernel chooses.
+
+
Objective
=========
@@ -18,27 +34,28 @@
Constraints and notes
=====================
-- IE hardware have a limited number of "keyslots" that can be programmed
+- IE hardware has a limited number of "keyslots" that can be programmed
with an encryption context (key, algorithm, data unit size, etc.) at any time.
One can specify a keyslot in a data request made to the device, and the
device will en/decrypt the data using the encryption context programmed into
that specified keyslot. When possible, we want to make multiple requests with
the same encryption context share the same keyslot.
-- We need a way for filesystems to specify an encryption context to use for
- en/decrypting a struct bio, and a device driver (like UFS) needs to be able
- to use that encryption context when it processes the bio.
+- We need a way for upper layers like filesystems to specify an encryption
+ context to use for en/decrypting a struct bio, and a device driver (like UFS)
+ needs to be able to use that encryption context when it processes the bio.
-- We need a way for device drivers to expose their capabilities in a unified
- way to the upper layers.
+- We need a way for device drivers to expose their inline encryption
+ capabilities in a unified way to the upper layers.
Design
======
-We add a struct bio_crypt_ctx to struct bio that can represent an
-encryption context, because we need to be able to pass this encryption
-context from the FS layer to the device driver to act upon.
+We add a :c:type:`struct bio_crypt_ctx` to :c:type:`struct bio` that can
+represent an encryption context, because we need to be able to pass this
+encryption context from the upper layers (like the fs layer) to the
+device driver to act upon.
While IE hardware works on the notion of keyslots, the FS layer has no
knowledge of keyslots - it simply wants to specify an encryption context to
@@ -46,7 +63,7 @@
We introduce a keyslot manager (KSM) that handles the translation from
encryption contexts specified by the FS to keyslots on the IE hardware.
-This KSM also serves as the way IE hardware can expose their capabilities to
+This KSM also serves as the way IE hardware can expose its capabilities to
upper layers. The generic mode of operation is: each device driver that wants
to support IE will construct a KSM and set it up in its struct request_queue.
Upper layers that want to use IE on this device can then use this KSM in
@@ -54,13 +71,7 @@
a keyslot. The presence of the KSM in the request queue shall be used to mean
that the device supports IE.
-On the device driver end of the interface, the device driver needs to tell the
-KSM how to actually manipulate the IE hardware in the device to do things like
-programming the crypto key into the IE hardware into a particular keyslot. All
-this is achieved through the :c:type:`struct keyslot_mgmt_ll_ops` that the
-device driver passes to the KSM when creating it.
-
-It uses refcounts to track which keyslots are idle (either they have no
+The KSM uses refcounts to track which keyslots are idle (either they have no
encryption context programmed, or there are no in-flight struct bios
referencing that keyslot). When a new encryption context needs a keyslot, it
tries to find a keyslot that has already been programmed with the same
@@ -70,114 +81,183 @@
is at least one.
-Blk-crypto
-==========
+blk-mq changes, other block layer changes and blk-crypto-fallback
+=================================================================
-The above is sufficient for simple cases, but does not work if there is a
-need for a crypto API fallback, or if we are want to use IE with layered
-devices. To these ends, we introduce blk-crypto. Blk-crypto allows us to
-present a unified view of encryption to the FS (so FS only needs to specify
-an encryption context and not worry about keyslots at all), and blk-crypto
-can decide whether to delegate the en/decryption to IE hardware or to the
-crypto API. Blk-crypto maintains an internal KSM that serves as the crypto
-API fallback.
+We add a pointer to a ``bi_crypt_context`` and ``keyslot`` to
+:c:type:`struct request`. These will be referred to as the ``crypto fields``
+for the request. This ``keyslot`` is the keyslot into which the
+``bi_crypt_context`` has been programmed in the KSM of the ``request_queue``
+that this request is being sent to.
-Blk-crypto needs to ensure that the encryption context is programmed into the
-"correct" keyslot manager for IE. If a bio is submitted to a layered device
-that eventually passes the bio down to a device that really does support IE, we
-want the encryption context to be programmed into a keyslot for the KSM of the
-device with IE support. However, blk-crypto does not know a priori whether a
-particular device is the final device in the layering structure for a bio or
-not. So in the case that a particular device does not support IE, since it is
-possibly the final destination device for the bio, if the bio requires
-encryption (i.e. the bio is doing a write operation), blk-crypto must fallback
-to the crypto API *before* sending the bio to the device.
+We introduce ``block/blk-crypto-fallback.c``, which allows upper layers to remain
+blissfully unaware of whether or not real inline encryption hardware is present
+underneath. When a bio is submitted with a target ``request_queue`` that doesn't
+support the encryption context specified with the bio, the block layer will
+en/decrypt the bio with the blk-crypto-fallback.
-Blk-crypto ensures that:
+If the bio is a ``WRITE`` bio, a bounce bio is allocated, and the data in the
+bio is encrypted and stored in the bounce bio - blk-mq will then proceed to
+process the bounce bio as if it were not encrypted at all (except where
+blk-integrity is concerned). ``blk-crypto-fallback`` sets the bounce bio's
+``bi_end_io`` to an internal function that cleans up the bounce bio and ends
+the original bio.
-- The bio's encryption context is programmed into a keyslot in the KSM of the
- request queue that the bio is being submitted to (or the crypto API fallback
- KSM if the request queue doesn't have a KSM), and that the ``bc_ksm``
- in the ``bi_crypt_context`` is set to this KSM
+If the bio is a ``READ`` bio, the bio's ``bi_end_io`` (and also ``bi_private``)
+is saved and overwritten by ``blk-crypto-fallback`` with an internal completion
+function. The bio's ``bi_crypt_context`` is also overwritten with ``NULL``, so
+that to the rest of the stack, the bio looks as if it were a regular bio that
+never had an encryption context specified. When the bio completes, that
+completion function restores the original ``bi_end_io`` (and ``bi_private``)
+and queues ``blk_crypto_fallback_decrypt_bio`` on a workqueue, which decrypts
+the bio and ends it again.
-- That the bio has its own individual reference to the keyslot in this KSM.
- Once the bio passes through blk-crypto, its encryption context is programmed
- in some KSM. The "its own individual reference to the keyslot" ensures that
- keyslots can be released by each bio independently of other bios while
- ensuring that the bio has a valid reference to the keyslot when, for e.g., the
- crypto API fallback KSM in blk-crypto performs crypto on the device's behalf.
- The individual references are ensured by increasing the refcount for the
- keyslot in the ``bc_ksm`` when a bio with a programmed encryption
- context is cloned.
+Regardless of whether real inline encryption hardware is used or the
+blk-crypto-fallback is used, the ciphertext written to disk (and hence the
+on-disk format of data) will be the same (assuming the hardware's implementation
+of the algorithm being used adheres to spec and functions correctly).
+
+If a ``request_queue``'s inline encryption hardware claims to support the
+encryption context specified with a bio, then it will not be handled by the
+``blk-crypto-fallback``. We will eventually reach a point in blk-mq when a
+:c:type:`struct request` needs to be allocated for that bio. At that point,
+blk-mq tries to program the encryption context into the ``request_queue``'s
+keyslot manager, and obtains a keyslot, which it stores in its newly added
+``keyslot`` field. This keyslot is released when the request is completed.
+
+When the first bio is added to a request, ``blk_crypto_rq_bio_prep`` is called,
+which sets the request's ``crypt_ctx`` to a copy of the bio's
+``bi_crypt_context``. ``bio_crypt_do_front_merge`` is called whenever a
+subsequent bio is merged to the front of the request, which updates the
+``crypt_ctx`` of the request so that it matches the newly merged bio's
+``bi_crypt_context``. In particular, the request keeps a copy of the
+``bi_crypt_context`` of the first bio in its bio-list (blk-mq needs to be
+careful to maintain this invariant during bio and request merges).
+
+To make it possible for inline encryption to work with request queue based
+layered devices, when a request is cloned, its ``crypto fields`` are cloned as
+well. When the cloned request is submitted, blk-mq programs the
+``bi_crypt_context`` of the request into the clone's request_queue's keyslot
+manager, and stores the returned keyslot in the clone's ``keyslot``.
-What blk-crypto does on bio submission
---------------------------------------
+API presented to users of the block layer
+=========================================
-**Case 1:** blk-crypto is given a bio with only an encryption context that hasn't
-been programmed into any keyslot in any KSM (for e.g. a bio from the FS).
- In this case, blk-crypto will program the encryption context into the KSM of the
- request queue the bio is being submitted to (and if this KSM does not exist,
- then it will program it into blk-crypto's internal KSM for crypto API
- fallback). The KSM that this encryption context was programmed into is stored
- as the ``bc_ksm`` in the bio's ``bi_crypt_context``.
+``struct blk_crypto_key`` represents a crypto key (the raw key, size of the
+key, the crypto algorithm to use, the data unit size to use, and the number of
+bytes required to represent data unit numbers that will be specified with the
+``bi_crypt_context``).
-**Case 2:** blk-crypto is given a bio whose encryption context has already been
-programmed into a keyslot in the *crypto API fallback* KSM.
- In this case, blk-crypto does nothing; it treats the bio as not having
- specified an encryption context. Note that we cannot do here what we will do
- in Case 3 because we would have already encrypted the bio via the crypto API
- by this point.
+``blk_crypto_init_key`` allows upper layers to initialize such a
+``blk_crypto_key``.
-**Case 3:** blk-crypto is given a bio whose encryption context has already been
-programmed into a keyslot in some KSM (that is *not* the crypto API fallback
-KSM).
- In this case, blk-crypto first releases that keyslot from that KSM and then
- treats the bio as in Case 1.
+``bio_crypt_set_ctx`` should be called on any bio that a user of
+the block layer wants en/decrypted via inline encryption (or the
+blk-crypto-fallback, if hardware support isn't available for the desired
+crypto configuration). This function takes the ``blk_crypto_key`` and the
+data unit number (DUN) to use when en/decrypting the bio.
-This way, when a device driver is processing a bio, it can be sure that
-the bio's encryption context has been programmed into some KSM (either the
-device driver's request queue's KSM, or blk-crypto's crypto API fallback KSM).
-It then simply needs to check if the bio's ``bc_ksm`` is the device's
-request queue's KSM. If so, then it should proceed with IE. If not, it should
-simply do nothing with respect to crypto, because some other KSM (perhaps the
-blk-crypto crypto API fallback KSM) is handling the en/decryption.
+``blk_crypto_config_supported`` allows upper layers to query whether or not a
+given encryption context (i.e. crypto configuration) can be handled on a given
+``request_queue`` by blk-crypto (either by real inline encryption hardware, or
+by the blk-crypto-fallback). This is useful e.g. when blk-crypto-fallback is
+disabled, and the upper layer wants to use an algorithm that may not be
+supported by the hardware - this function lets the upper layer know ahead of
+time that the algorithm isn't supported, and the upper layer can fall back to
+something else if appropriate.
-Blk-crypto will release the keyslot that is being held by the bio (and also
-decrypt it if the bio is using the crypto API fallback KSM) once
-``bio_remaining_done`` returns true for the bio.
+``blk_crypto_start_using_key`` - Upper layers must call this function on a
+``blk_crypto_key`` and a ``request_queue`` before using the key with any bio
+headed for that ``request_queue``. This function ensures that either the
+hardware supports the key's crypto settings, or the crypto API fallback has
+transforms for the needed mode allocated and ready to go. Note that this
+function may allocate an ``skcipher``, and must not be called from the data
+path, since allocating ``skciphers`` from the data path can deadlock.
+
+``blk_crypto_evict_key`` *must* be called by upper layers before a
+``blk_crypto_key`` is freed. Further, it *must* only be called once there are
+no more in-flight requests that use that ``blk_crypto_key``.
+``blk_crypto_evict_key`` will ensure that a key is removed from any keyslots in
+inline encryption hardware that the key might have been programmed into (and
+from the blk-crypto-fallback).
+
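+As a rough illustration, an upper layer might use this API along the following
+lines. This is a simplified sketch only: error handling is abbreviated,
+``my_raw_key``, ``q`` and ``bio`` are placeholders, and the exact argument
+lists may differ slightly in this tree (e.g. for hardware-wrapped key
+support)::
+
+    struct blk_crypto_key key;  /* must outlive all I/O that uses it */
+    u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { 0 };
+    int err;
+
+    /* Describe the key: mode, DUN width in bytes, and data unit size. */
+    err = blk_crypto_init_key(&key, my_raw_key,
+                              BLK_ENCRYPTION_MODE_AES_256_XTS,
+                              sizeof(u64), 4096);
+    if (err)
+        return err;
+
+    /*
+     * Not from the data path: may allocate fallback skciphers if the
+     * hardware doesn't support this crypto configuration.
+     */
+    err = blk_crypto_start_using_key(&key, q);
+    if (err)
+        return err;
+
+    /* Tag each bio with the key and its starting DUN before submission. */
+    bio_crypt_set_ctx(bio, &key, dun, GFP_NOIO);
+    submit_bio(bio);
+
+    /* Much later, once no in-flight requests use the key any more: */
+    blk_crypto_evict_key(q, &key);
+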
+API presented to device drivers
+===============================
+
+A :c:type:`struct blk_keyslot_manager` should be set up by device drivers in
+the ``request_queue`` of the device. The device driver needs to call
+``blk_ksm_init`` on the ``blk_keyslot_manager``, specifying the number of
+keyslots supported by the hardware.
+
+The device driver also needs to tell the KSM how to actually manipulate the
+IE hardware in the device to do things like programming the crypto key into
+a particular keyslot in the IE hardware. All this is achieved through the
+:c:type:`struct blk_ksm_ll_ops` field in the KSM that the device driver
+must fill in after initializing the ``blk_keyslot_manager``.
+
+The KSM also handles runtime power management for the device when applicable
+(e.g. when it wants to program a crypto key into the IE hardware, the device
+must be runtime powered on) - so the device driver must also set the ``dev``
+field in the KSM to point to the ``struct device`` for the KSM to use for
+runtime power management.
+
+``blk_ksm_reprogram_all_keys`` can be called by device drivers if the device
+needs each and every one of its keyslots to be reprogrammed with the key it
+"should have" at the point in time when the function is called. This is useful
+e.g. if a device loses all its keys on runtime power down/up.
+
+``blk_ksm_destroy`` should be called to free up all resources allocated by
+``blk_ksm_init`` for a keyslot manager, once the ``blk_keyslot_manager`` is no
+longer needed.
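+
+A minimal sketch of such a driver-side setup, modelled on the fallback's own
+KSM initialization, is shown below (``my_hw_program``, ``my_hw_evict``, the
+keyslot count and the supported modes are hypothetical; a real driver such as
+UFS hangs the KSM off its own structures)::
+
+    static int my_hw_program(struct blk_keyslot_manager *ksm,
+                             const struct blk_crypto_key *key,
+                             unsigned int slot)
+    {
+        /* Program key->raw into hardware keyslot 'slot'. */
+        return 0;
+    }
+
+    static int my_hw_evict(struct blk_keyslot_manager *ksm,
+                           const struct blk_crypto_key *key,
+                           unsigned int slot)
+    {
+        /* Clear hardware keyslot 'slot'. */
+        return 0;
+    }
+
+    static const struct blk_ksm_ll_ops my_ksm_ll_ops = {
+        .keyslot_program = my_hw_program,
+        .keyslot_evict   = my_hw_evict,
+    };
+
+    static struct blk_keyslot_manager my_ksm;
+
+    static int my_driver_init_crypto(struct request_queue *q)
+    {
+        int err = blk_ksm_init(&my_ksm, 32);
+
+        if (err)
+            return err;
+
+        my_ksm.ksm_ll_ops = my_ksm_ll_ops;
+        my_ksm.max_dun_bytes_supported = 8;
+        my_ksm.features = BLK_CRYPTO_FEATURE_STANDARD_KEYS;
+        /* Bitmask of supported data unit sizes, per crypto mode. */
+        my_ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] =
+            512 | 4096;
+
+        q->ksm = &my_ksm;
+        return 0;
+    }
+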
Layered Devices
===============
-Layered devices that wish to support IE need to create their own keyslot
-manager for their request queue, and expose whatever functionality they choose.
-When a layered device wants to pass a bio to another layer (either by
-resubmitting the same bio, or by submitting a clone), it doesn't need to do
-anything special because the bio (or the clone) will once again pass through
-blk-crypto, which will work as described in Case 3. If a layered device wants
-for some reason to do the IO by itself instead of passing it on to a child
-device, but it also chose to expose IE capabilities by setting up a KSM in its
-request queue, it is then responsible for en/decrypting the data itself. In
-such cases, the device can choose to call the blk-crypto function
-``blk_crypto_fallback_to_kernel_crypto_api`` (TODO: Not yet implemented), which will
-cause the en/decryption to be done via the crypto API fallback.
+Request queue based layered devices like dm-rq that wish to support IE need to
+create their own keyslot manager for their request queue, and expose whatever
+functionality they choose. When a layered device wants to pass a clone of that
+request to another ``request_queue``, blk-crypto will initialize and prepare the
+clone as necessary - see ``blk_crypto_insert_cloned_request`` in
+``blk-crypto.c``.
Future Optimizations for layered devices
========================================
-Creating a keyslot manager for the layered device uses up memory for each
-keyslot, and in general, a layered device (like dm-linear) merely passes the
-request on to a "child" device, so the keyslots in the layered device itself
-might be completely unused. We can instead define a new type of KSM; the
-"passthrough KSM", that layered devices can use to let blk-crypto know that
-this layered device *will* pass the bio to some child device (and hence
-through blk-crypto again, at which point blk-crypto can program the encryption
-context, instead of programming it into the layered device's KSM). Again, if
-the device "lies" and decides to do the IO itself instead of passing it on to
-a child device, it is responsible for doing the en/decryption (and can choose
-to call ``blk_crypto_fallback_to_kernel_crypto_api``). Another use case for the
-"passthrough KSM" is for IE devices that want to manage their own keyslots/do
-not have a limited number of keyslots.
+Creating a keyslot manager for a layered device uses up memory for each
+keyslot, and in general, a layered device merely passes the request on to a
+"child" device, so the keyslots in the layered device itself are completely
+unused, and don't need any refcounting or keyslot programming. We can instead
+define a new type of KSM: the "passthrough KSM", which layered devices can use
+to advertise an unlimited number of keyslots, and support for any encryption
+algorithms they choose, while not actually using any memory for each keyslot.
+Another use case for the "passthrough KSM" is for IE devices that do not have a
+limited number of keyslots.
+
+
+Interaction between inline encryption and blk integrity
+=======================================================
+
+At the time of this patch, there is no real hardware that supports both these
+features. However, these features do interact with each other, and it's not
+completely trivial to make them both work together properly. In particular,
+when a WRITE bio wants to use inline encryption on a device that supports both
+features, the bio will have an encryption context specified, after which
+its integrity information is calculated (using the plaintext data, since
+the encryption will happen while data is being written), and the data and
+integrity info is sent to the device. Obviously, the integrity info must be
+verified before the data is encrypted. After the data is encrypted, the device
+must not store the integrity info that it received with the plaintext data
+since that might reveal information about the plaintext data. As such, it must
+re-generate the integrity info from the ciphertext data and store that on disk
+instead. Another issue with storing the integrity info of the plaintext data is
+that it changes the on-disk format depending on whether hardware inline
+encryption support is present or the kernel crypto API fallback is used (since
+if the fallback is used, the device will receive the integrity info of the
+ciphertext, not that of the plaintext).
+
+Because there isn't any real hardware yet, it seems prudent to assume that
+hardware implementations might not implement both features together correctly,
+and disallow the combination for now. Whenever a device supports integrity, the
+kernel will pretend that the device does not support hardware inline encryption
+(by essentially setting the keyslot manager in the request_queue of the device
+to NULL). When the crypto API fallback is enabled, this means that all bios with
+an encryption context will use the fallback, and IO will complete as usual.
+When the fallback is disabled, a bio with an encryption context will be failed.
diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
index 4218ac6..494ba15 100644
--- a/Documentation/filesystems/f2fs.rst
+++ b/Documentation/filesystems/f2fs.rst
@@ -258,7 +258,12 @@
on compression extension list and enable compression on
these file by default rather than to enable it via ioctl.
For other files, we can still enable compression via ioctl.
-====================== ============================================================
+inlinecrypt
+ Encrypt/decrypt the contents of encrypted files using the
+ blk-crypto framework rather than filesystem-layer encryption.
+ This allows the use of inline encryption hardware. The on-disk
+ format is unaffected. For more details, see
+ Documentation/block/inline-encryption.rst.
Debugfs Entries
===============
diff --git a/block/Makefile b/block/Makefile
index 07d13e5..7871916 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -36,6 +36,5 @@
obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o
obj-$(CONFIG_BLK_PM) += blk-pm.o
-obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o bio-crypt-ctx.o \
- blk-crypto.o
-obj-$(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) += blk-crypto-fallback.o
\ No newline at end of file
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o blk-crypto.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) += blk-crypto-fallback.o
diff --git a/block/bio-crypt-ctx.c b/block/bio-crypt-ctx.c
deleted file mode 100644
index 75008b2..0000000
--- a/block/bio-crypt-ctx.c
+++ /dev/null
@@ -1,142 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright 2019 Google LLC
- */
-
-#include <linux/bio.h>
-#include <linux/blkdev.h>
-#include <linux/keyslot-manager.h>
-#include <linux/module.h>
-#include <linux/slab.h>
-
-#include "blk-crypto-internal.h"
-
-static int num_prealloc_crypt_ctxs = 128;
-
-module_param(num_prealloc_crypt_ctxs, int, 0444);
-MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
- "Number of bio crypto contexts to preallocate");
-
-static struct kmem_cache *bio_crypt_ctx_cache;
-static mempool_t *bio_crypt_ctx_pool;
-
-int __init bio_crypt_ctx_init(void)
-{
- size_t i;
-
- bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0);
- if (!bio_crypt_ctx_cache)
- return -ENOMEM;
-
- bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs,
- bio_crypt_ctx_cache);
- if (!bio_crypt_ctx_pool)
- return -ENOMEM;
-
- /* This is assumed in various places. */
- BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0);
-
- /* Sanity check that no algorithm exceeds the defined limits. */
- for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) {
- BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE);
- BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE);
- }
-
- return 0;
-}
-
-struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
-{
- return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
-}
-EXPORT_SYMBOL_GPL(bio_crypt_alloc_ctx);
-
-void bio_crypt_free_ctx(struct bio *bio)
-{
- mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
- bio->bi_crypt_context = NULL;
-}
-
-void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
-{
- const struct bio_crypt_ctx *src_bc = src->bi_crypt_context;
-
- bio_clone_skip_dm_default_key(dst, src);
-
- /*
- * If a bio is fallback_crypted, then it will be decrypted when
- * bio_endio is called. As we only want the data to be decrypted once,
- * copies of the bio must not have have a crypt context.
- */
- if (!src_bc || bio_crypt_fallback_crypted(src_bc))
- return;
-
- dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
- *dst->bi_crypt_context = *src_bc;
-
- if (src_bc->bc_keyslot >= 0)
- keyslot_manager_get_slot(src_bc->bc_ksm, src_bc->bc_keyslot);
-}
-EXPORT_SYMBOL_GPL(bio_crypt_clone);
-
-bool bio_crypt_should_process(struct request *rq)
-{
- struct bio *bio = rq->bio;
-
- if (!bio || !bio->bi_crypt_context)
- return false;
-
- return rq->q->ksm == bio->bi_crypt_context->bc_ksm;
-}
-EXPORT_SYMBOL_GPL(bio_crypt_should_process);
-
-/*
- * Checks that two bio crypt contexts are compatible - i.e. that
- * they are mergeable except for data_unit_num continuity.
- */
-bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
-{
- struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
- struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
-
- if (!bc1)
- return !bc2;
- return bc2 && bc1->bc_key == bc2->bc_key;
-}
-
-/*
- * Checks that two bio crypt contexts are compatible, and also
- * that their data_unit_nums are continuous (and can hence be merged)
- * in the order b_1 followed by b_2.
- */
-bool bio_crypt_ctx_mergeable(struct bio *b_1, unsigned int b1_bytes,
- struct bio *b_2)
-{
- struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
- struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
-
- if (!bio_crypt_ctx_compatible(b_1, b_2))
- return false;
-
- return !bc1 || bio_crypt_dun_is_contiguous(bc1, b1_bytes, bc2->bc_dun);
-}
-
-void bio_crypt_ctx_release_keyslot(struct bio_crypt_ctx *bc)
-{
- keyslot_manager_put_slot(bc->bc_ksm, bc->bc_keyslot);
- bc->bc_ksm = NULL;
- bc->bc_keyslot = -1;
-}
-
-int bio_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
- struct keyslot_manager *ksm)
-{
- int slot = keyslot_manager_get_slot_for_key(ksm, bc->bc_key);
-
- if (slot < 0)
- return slot;
-
- bc->bc_keyslot = slot;
- bc->bc_ksm = ksm;
- return 0;
-}
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index bf62c25..3579ac0 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -42,6 +42,9 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
struct bio_set *bs = bio->bi_pool;
unsigned inline_vecs;
+ if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
+ return ERR_PTR(-EOPNOTSUPP);
+
if (!bs || !mempool_initialized(&bs->bio_integrity_pool)) {
bip = kmalloc(struct_size(bip, bip_inline_vecs, nr_vecs), gfp_mask);
inline_vecs = nr_vecs;
diff --git a/block/bio.c b/block/bio.c
index f37d8c4..960303d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -713,10 +713,15 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
bio_crypt_clone(b, bio, gfp_mask);
- if (bio_integrity(bio) &&
- bio_integrity_clone(b, bio, gfp_mask) < 0) {
- bio_put(b);
- return NULL;
+ if (bio_integrity(bio)) {
+ int ret;
+
+ ret = bio_integrity_clone(b, bio, gfp_mask);
+
+ if (ret < 0) {
+ bio_put(b);
+ return NULL;
+ }
}
return b;
@@ -1391,10 +1396,6 @@ void bio_endio(struct bio *bio)
again:
if (!bio_remaining_done(bio))
return;
-
- if (!blk_crypto_endio(bio))
- return;
-
if (!bio_integrity_endio(bio))
return;
diff --git a/block/blk-core.c b/block/blk-core.c
index 4a8b6be..8391b8ea 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -122,6 +122,7 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
rq->start_time_ns = ktime_get_ns();
rq->part = NULL;
refcount_set(&rq->ref, 1);
+ blk_crypto_rq_set_defaults(rq);
}
EXPORT_SYMBOL(blk_rq_init);
@@ -625,6 +626,8 @@ bool bio_attempt_back_merge(struct request *req, struct bio *bio,
req->biotail = bio;
req->__data_len += bio->bi_iter.bi_size;
+ bio_crypt_free_ctx(bio);
+
blk_account_io_start(req, false);
return true;
}
@@ -649,6 +652,8 @@ bool bio_attempt_front_merge(struct request *req, struct bio *bio,
req->__sector = bio->bi_iter.bi_sector;
req->__data_len += bio->bi_iter.bi_size;
+ bio_crypt_do_front_merge(req, bio);
+
blk_account_io_start(req, false);
return true;
}
@@ -1071,8 +1076,7 @@ blk_qc_t generic_make_request(struct bio *bio)
/* Create a fresh bio_list for all subordinate requests */
bio_list_on_stack[1] = bio_list_on_stack[0];
bio_list_init(&bio_list_on_stack[0]);
-
- if (!blk_crypto_submit_bio(&bio))
+ if (blk_crypto_bio_prep(&bio))
ret = q->make_request_fn(q, bio);
blk_queue_exit(q);
@@ -1133,8 +1137,7 @@ blk_qc_t direct_make_request(struct bio *bio)
bio_io_error(bio);
return BLK_QC_T_NONE;
}
-
- if (!blk_crypto_submit_bio(&bio))
+ if (blk_crypto_bio_prep(&bio))
ret = q->make_request_fn(q, bio);
blk_queue_exit(q);
return ret;
@@ -1265,6 +1268,9 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))
return BLK_STS_IOERR;
+ if (blk_crypto_insert_cloned_request(rq))
+ return BLK_STS_IOERR;
+
if (blk_queue_io_stat(q))
blk_account_io_start(rq, true);
@@ -1642,6 +1648,9 @@ int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
rq->ioprio = rq_src->ioprio;
rq->extra_len = rq_src->extra_len;
+ if (rq->bio)
+ blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask);
+
return 0;
free_and_out:
@@ -1803,11 +1812,5 @@ int __init blk_dev_init(void)
blk_debugfs_root = debugfs_create_dir("block", NULL);
#endif
- if (bio_crypt_ctx_init() < 0)
- panic("Failed to allocate mem for bio crypt ctxs\n");
-
- if (blk_crypto_fallback_init() < 0)
- panic("Failed to init blk-crypto-fallback\n");
-
return 0;
}
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index d6c5788..d1f6d31 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -12,6 +12,7 @@
#include <crypto/skcipher.h>
#include <linux/blk-cgroup.h>
#include <linux/blk-crypto.h>
+#include <linux/blkdev.h>
#include <linux/crypto.h>
#include <linux/keyslot-manager.h>
#include <linux/mempool.h>
@@ -44,10 +45,18 @@ struct bio_fallback_crypt_ctx {
* resubmitted
*/
struct bvec_iter crypt_iter;
- u64 fallback_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+ union {
+ struct {
+ struct work_struct work;
+ struct bio *bio;
+ };
+ struct {
+ void *bi_private_orig;
+ bio_end_io_t *bi_end_io_orig;
+ };
+ };
};
-/* The following few vars are only used during the crypto API fallback */
static struct kmem_cache *bio_fallback_crypt_ctx_cache;
static mempool_t *bio_fallback_crypt_ctx_pool;
@@ -63,27 +72,14 @@ static mempool_t *bio_fallback_crypt_ctx_pool;
static DEFINE_MUTEX(tfms_init_lock);
static bool tfms_inited[BLK_ENCRYPTION_MODE_MAX];
-struct blk_crypto_decrypt_work {
- struct work_struct work;
- struct bio *bio;
-};
-
static struct blk_crypto_keyslot {
- struct crypto_skcipher *tfm;
enum blk_crypto_mode_num crypto_mode;
struct crypto_skcipher *tfms[BLK_ENCRYPTION_MODE_MAX];
} *blk_crypto_keyslots;
-/* The following few vars are only used during the crypto API fallback */
-static struct keyslot_manager *blk_crypto_ksm;
+static struct blk_keyslot_manager blk_crypto_ksm;
static struct workqueue_struct *blk_crypto_wq;
static mempool_t *blk_crypto_bounce_page_pool;
-static struct kmem_cache *blk_crypto_decrypt_work_cache;
-
-bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc)
-{
- return bc && bc->bc_ksm == blk_crypto_ksm;
-}
/*
* This is the key we set when evicting a keyslot. This *should* be the all 0's
@@ -106,21 +102,19 @@ static void blk_crypto_evict_keyslot(unsigned int slot)
slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID;
}
-static int blk_crypto_keyslot_program(struct keyslot_manager *ksm,
+static int blk_crypto_keyslot_program(struct blk_keyslot_manager *ksm,
const struct blk_crypto_key *key,
unsigned int slot)
{
struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
- const enum blk_crypto_mode_num crypto_mode = key->crypto_mode;
+ const enum blk_crypto_mode_num crypto_mode =
+ key->crypto_cfg.crypto_mode;
int err;
if (crypto_mode != slotp->crypto_mode &&
- slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID) {
+ slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
blk_crypto_evict_keyslot(slot);
- }
- if (!slotp->tfms[crypto_mode])
- return -ENOMEM;
slotp->crypto_mode = crypto_mode;
err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->raw,
key->size);
@@ -131,7 +125,7 @@ static int blk_crypto_keyslot_program(struct keyslot_manager *ksm,
return 0;
}
-static int blk_crypto_keyslot_evict(struct keyslot_manager *ksm,
+static int blk_crypto_keyslot_evict(struct blk_keyslot_manager *ksm,
const struct blk_crypto_key *key,
unsigned int slot)
{
@@ -141,16 +135,15 @@ static int blk_crypto_keyslot_evict(struct keyslot_manager *ksm,
/*
* The crypto API fallback KSM ops - only used for a bio when it specifies a
- * blk_crypto_mode for which we failed to get a keyslot in the device's inline
- * encryption hardware (which probably means the device doesn't have inline
- * encryption hardware that supports that crypto mode).
+ * blk_crypto_key that was not supported by the device's inline encryption
+ * hardware.
*/
-static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = {
+static const struct blk_ksm_ll_ops blk_crypto_ksm_ll_ops = {
.keyslot_program = blk_crypto_keyslot_program,
.keyslot_evict = blk_crypto_keyslot_evict,
};
-static void blk_crypto_encrypt_endio(struct bio *enc_bio)
+static void blk_crypto_fallback_encrypt_endio(struct bio *enc_bio)
{
struct bio *src_bio = enc_bio->bi_private;
int i;
@@ -184,12 +177,6 @@ static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
bio_for_each_segment(bv, bio_src, iter)
bio->bi_io_vec[bio->bi_vcnt++] = bv;
- if (bio_integrity(bio_src) &&
- bio_integrity_clone(bio, bio_src, GFP_NOIO) < 0) {
- bio_put(bio);
- return NULL;
- }
-
bio_clone_blkg_association(bio, bio_src);
blkcg_bio_issue_init(bio);
@@ -198,30 +185,30 @@ static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
return bio;
}
-static int blk_crypto_alloc_cipher_req(struct bio *src_bio,
- struct skcipher_request **ciph_req_ret,
- struct crypto_wait *wait)
+static bool blk_crypto_alloc_cipher_req(struct blk_ksm_keyslot *slot,
+ struct skcipher_request **ciph_req_ret,
+ struct crypto_wait *wait)
{
struct skcipher_request *ciph_req;
const struct blk_crypto_keyslot *slotp;
+ int keyslot_idx = blk_ksm_get_slot_idx(slot);
- slotp = &blk_crypto_keyslots[src_bio->bi_crypt_context->bc_keyslot];
+ slotp = &blk_crypto_keyslots[keyslot_idx];
ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
GFP_NOIO);
- if (!ciph_req) {
- src_bio->bi_status = BLK_STS_RESOURCE;
- return -ENOMEM;
- }
+ if (!ciph_req)
+ return false;
skcipher_request_set_callback(ciph_req,
CRYPTO_TFM_REQ_MAY_BACKLOG |
CRYPTO_TFM_REQ_MAY_SLEEP,
crypto_req_done, wait);
*ciph_req_ret = ciph_req;
- return 0;
+
+ return true;
}
-static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
+static bool blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
{
struct bio *bio = *bio_ptr;
unsigned int i = 0;
@@ -240,13 +227,14 @@ static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL);
if (!split_bio) {
bio->bi_status = BLK_STS_RESOURCE;
- return -ENOMEM;
+ return false;
}
bio_chain(split_bio, bio);
generic_make_request(bio);
*bio_ptr = split_bio;
}
- return 0;
+
+ return true;
}
union blk_crypto_iv {
@@ -267,52 +255,54 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
* The crypto API fallback's encryption routine.
* Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
* and replace *bio_ptr with the bounce bio. May split input bio if it's too
- * large.
+ * large. Returns true on success. Returns false and sets bio->bi_status on
+ * error.
*/
-static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
+static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
{
- struct bio *src_bio;
+ struct bio *src_bio, *enc_bio;
+ struct bio_crypt_ctx *bc;
+ struct blk_ksm_keyslot *slot;
+ int data_unit_size;
struct skcipher_request *ciph_req = NULL;
DECLARE_CRYPTO_WAIT(wait);
u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
- union blk_crypto_iv iv;
struct scatterlist src, dst;
- struct bio *enc_bio;
+ union blk_crypto_iv iv;
unsigned int i, j;
- int data_unit_size;
- struct bio_crypt_ctx *bc;
- int err = 0;
+ bool ret = false;
+ blk_status_t blk_st;
/* Split the bio if it's too big for single page bvec */
- err = blk_crypto_split_bio_if_needed(bio_ptr);
- if (err)
- return err;
+ if (!blk_crypto_split_bio_if_needed(bio_ptr))
+ return false;
src_bio = *bio_ptr;
bc = src_bio->bi_crypt_context;
- data_unit_size = bc->bc_key->data_unit_size;
+ data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
/* Allocate bounce bio for encryption */
enc_bio = blk_crypto_clone_bio(src_bio);
if (!enc_bio) {
src_bio->bi_status = BLK_STS_RESOURCE;
- return -ENOMEM;
+ return false;
}
/*
* Use the crypto API fallback keyslot manager to get a crypto_skcipher
* for the algorithm and key specified for this bio.
*/
- err = bio_crypt_ctx_acquire_keyslot(bc, blk_crypto_ksm);
- if (err) {
- src_bio->bi_status = BLK_STS_IOERR;
+ blk_st = blk_ksm_get_slot_for_key(&blk_crypto_ksm, bc->bc_key, &slot);
+ if (blk_st != BLK_STS_OK) {
+ src_bio->bi_status = blk_st;
goto out_put_enc_bio;
}
/* and then allocate an skcipher_request for it */
- err = blk_crypto_alloc_cipher_req(src_bio, &ciph_req, &wait);
- if (err)
+ if (!blk_crypto_alloc_cipher_req(slot, &ciph_req, &wait)) {
+ src_bio->bi_status = BLK_STS_RESOURCE;
goto out_release_keyslot;
+ }
memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
sg_init_table(&src, 1);
@@ -332,7 +322,6 @@ static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
if (!ciphertext_page) {
src_bio->bi_status = BLK_STS_RESOURCE;
- err = -ENOMEM;
goto out_free_bounce_pages;
}
@@ -344,11 +333,10 @@ static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
/* Encrypt each data unit in this page */
for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
blk_crypto_dun_to_iv(curr_dun, &iv);
- err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
- &wait);
- if (err) {
+ if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
+ &wait)) {
i++;
- src_bio->bi_status = BLK_STS_RESOURCE;
+ src_bio->bi_status = BLK_STS_IOERR;
goto out_free_bounce_pages;
}
bio_crypt_dun_increment(curr_dun, 1);
@@ -358,11 +346,11 @@ static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
}
enc_bio->bi_private = src_bio;
- enc_bio->bi_end_io = blk_crypto_encrypt_endio;
+ enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
*bio_ptr = enc_bio;
+ ret = true;
enc_bio = NULL;
- err = 0;
goto out_free_ciph_req;
out_free_bounce_pages:
@@ -372,61 +360,53 @@ static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
out_free_ciph_req:
skcipher_request_free(ciph_req);
out_release_keyslot:
- bio_crypt_ctx_release_keyslot(bc);
+ blk_ksm_put_slot(slot);
out_put_enc_bio:
if (enc_bio)
bio_put(enc_bio);
- return err;
-}
-
-static void blk_crypto_free_fallback_crypt_ctx(struct bio *bio)
-{
- mempool_free(container_of(bio->bi_crypt_context,
- struct bio_fallback_crypt_ctx,
- crypt_ctx),
- bio_fallback_crypt_ctx_pool);
- bio->bi_crypt_context = NULL;
+ return ret;
}
/*
* The crypto API fallback's main decryption routine.
- * Decrypts input bio in place.
+ * Decrypts input bio in place, and calls bio_endio on the bio.
*/
-static void blk_crypto_decrypt_bio(struct work_struct *work)
+static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
{
- struct blk_crypto_decrypt_work *decrypt_work =
- container_of(work, struct blk_crypto_decrypt_work, work);
- struct bio *bio = decrypt_work->bio;
+ struct bio_fallback_crypt_ctx *f_ctx =
+ container_of(work, struct bio_fallback_crypt_ctx, work);
+ struct bio *bio = f_ctx->bio;
+ struct bio_crypt_ctx *bc = &f_ctx->crypt_ctx;
+ struct blk_ksm_keyslot *slot;
struct skcipher_request *ciph_req = NULL;
DECLARE_CRYPTO_WAIT(wait);
- struct bio_vec bv;
- struct bvec_iter iter;
u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
union blk_crypto_iv iv;
struct scatterlist sg;
- struct bio_crypt_ctx *bc = bio->bi_crypt_context;
- struct bio_fallback_crypt_ctx *f_ctx =
- container_of(bc, struct bio_fallback_crypt_ctx, crypt_ctx);
- const int data_unit_size = bc->bc_key->data_unit_size;
+ struct bio_vec bv;
+ struct bvec_iter iter;
+ const int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
unsigned int i;
- int err;
+ blk_status_t blk_st;
/*
* Use the crypto API fallback keyslot manager to get a crypto_skcipher
* for the algorithm and key specified for this bio.
*/
- if (bio_crypt_ctx_acquire_keyslot(bc, blk_crypto_ksm)) {
- bio->bi_status = BLK_STS_RESOURCE;
+ blk_st = blk_ksm_get_slot_for_key(&blk_crypto_ksm, bc->bc_key, &slot);
+ if (blk_st != BLK_STS_OK) {
+ bio->bi_status = blk_st;
goto out_no_keyslot;
}
/* and then allocate an skcipher_request for it */
- err = blk_crypto_alloc_cipher_req(bio, &ciph_req, &wait);
- if (err)
+ if (!blk_crypto_alloc_cipher_req(slot, &ciph_req, &wait)) {
+ bio->bi_status = BLK_STS_RESOURCE;
goto out;
+ }
- memcpy(curr_dun, f_ctx->fallback_dun, sizeof(curr_dun));
+ memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
sg_init_table(&sg, 1);
skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
iv.bytes);
@@ -452,40 +432,174 @@ static void blk_crypto_decrypt_bio(struct work_struct *work)
out:
skcipher_request_free(ciph_req);
- bio_crypt_ctx_release_keyslot(bc);
+ blk_ksm_put_slot(slot);
out_no_keyslot:
- kmem_cache_free(blk_crypto_decrypt_work_cache, decrypt_work);
- blk_crypto_free_fallback_crypt_ctx(bio);
+ mempool_free(f_ctx, bio_fallback_crypt_ctx_pool);
bio_endio(bio);
}
-/*
- * Queue bio for decryption.
- * Returns true iff bio was queued for decryption.
+/**
+ * blk_crypto_fallback_decrypt_endio - queue bio for fallback decryption
+ *
+ * @bio: the bio to queue
+ *
+ * Restore bi_private and bi_end_io, and queue the bio for decryption into a
+ * workqueue, since this function will be called from an atomic context.
*/
-bool blk_crypto_queue_decrypt_bio(struct bio *bio)
+static void blk_crypto_fallback_decrypt_endio(struct bio *bio)
{
- struct blk_crypto_decrypt_work *decrypt_work;
+ struct bio_fallback_crypt_ctx *f_ctx = bio->bi_private;
+
+ bio->bi_private = f_ctx->bi_private_orig;
+ bio->bi_end_io = f_ctx->bi_end_io_orig;
/* If there was an IO error, don't queue for decrypt. */
- if (bio->bi_status)
- goto out;
-
- decrypt_work = kmem_cache_zalloc(blk_crypto_decrypt_work_cache,
- GFP_ATOMIC);
- if (!decrypt_work) {
- bio->bi_status = BLK_STS_RESOURCE;
- goto out;
+ if (bio->bi_status) {
+ mempool_free(f_ctx, bio_fallback_crypt_ctx_pool);
+ bio_endio(bio);
+ return;
}
- INIT_WORK(&decrypt_work->work, blk_crypto_decrypt_bio);
- decrypt_work->bio = bio;
- queue_work(blk_crypto_wq, &decrypt_work->work);
+ INIT_WORK(&f_ctx->work, blk_crypto_fallback_decrypt_bio);
+ f_ctx->bio = bio;
+ queue_work(blk_crypto_wq, &f_ctx->work);
+}
+
+/**
+ * blk_crypto_fallback_bio_prep - Prepare a bio to use fallback en/decryption
+ *
+ * @bio_ptr: pointer to the bio to prepare
+ *
+ * If bio is doing a WRITE operation, this splits the bio into two parts if it's
+ * too big (see blk_crypto_split_bio_if_needed). It then allocates a bounce bio
+ * for the first part, encrypts it, and updates bio_ptr to point to the bounce
+ * bio.
+ *
+ * For a READ operation, we mark the bio for decryption by using bi_private and
+ * bi_end_io.
+ *
+ * In either case, this function will make the bio look like a regular bio (i.e.
+ * as if no encryption context was ever specified) for the purposes of the rest
+ * of the stack except for blk-integrity (blk-integrity and blk-crypto are not
+ * currently supported together).
+ *
+ * Return: true on success. Sets bio->bi_status and returns false on error.
+ */
+bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
+{
+ struct bio *bio = *bio_ptr;
+ struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+ struct bio_fallback_crypt_ctx *f_ctx;
+
+ if (bc->bc_key->crypto_cfg.is_hw_wrapped) {
+ pr_warn_once("HW wrapped key cannot be used with fallback.\n");
+ bio->bi_status = BLK_STS_NOTSUPP;
+ return false;
+ }
+
+ if (WARN_ON_ONCE(!tfms_inited[bc->bc_key->crypto_cfg.crypto_mode])) {
+ /* User didn't call blk_crypto_start_using_key() first */
+ bio->bi_status = BLK_STS_IOERR;
+ return false;
+ }
+
+ if (!blk_ksm_crypto_cfg_supported(&blk_crypto_ksm,
+ &bc->bc_key->crypto_cfg)) {
+ bio->bi_status = BLK_STS_NOTSUPP;
+ return false;
+ }
+
+ if (bio_data_dir(bio) == WRITE)
+ return blk_crypto_fallback_encrypt_bio(bio_ptr);
+
+ /*
+ * bio READ case: Set up a f_ctx in the bio's bi_private and set the
+ * bi_end_io appropriately to trigger decryption when the bio is ended.
+ */
+ f_ctx = mempool_alloc(bio_fallback_crypt_ctx_pool, GFP_NOIO);
+ f_ctx->crypt_ctx = *bc;
+ f_ctx->crypt_iter = bio->bi_iter;
+ f_ctx->bi_private_orig = bio->bi_private;
+ f_ctx->bi_end_io_orig = bio->bi_end_io;
+ bio->bi_private = (void *)f_ctx;
+ bio->bi_end_io = blk_crypto_fallback_decrypt_endio;
+ bio_crypt_free_ctx(bio);
return true;
+}
+
+int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
+{
+ return blk_ksm_evict_key(&blk_crypto_ksm, key);
+}
+
+static bool blk_crypto_fallback_inited;
+static int blk_crypto_fallback_init(void)
+{
+ int i;
+ int err = -ENOMEM;
+
+ if (blk_crypto_fallback_inited)
+ return 0;
+
+ prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
+
+ err = blk_ksm_init(&blk_crypto_ksm, blk_crypto_num_keyslots);
+ if (err)
+ goto out;
+ err = -ENOMEM;
+
+ blk_crypto_ksm.ksm_ll_ops = blk_crypto_ksm_ll_ops;
+ blk_crypto_ksm.max_dun_bytes_supported = BLK_CRYPTO_MAX_IV_SIZE;
+ blk_crypto_ksm.features = BLK_CRYPTO_FEATURE_STANDARD_KEYS;
+
+ /* All blk-crypto modes have a crypto API fallback. */
+ for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++)
+ blk_crypto_ksm.crypto_modes_supported[i] = 0xFFFFFFFF;
+ blk_crypto_ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_INVALID] = 0;
+
+ blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
+ WQ_UNBOUND | WQ_HIGHPRI |
+ WQ_MEM_RECLAIM, num_online_cpus());
+ if (!blk_crypto_wq)
+ goto fail_free_ksm;
+
+ blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots,
+ sizeof(blk_crypto_keyslots[0]),
+ GFP_KERNEL);
+ if (!blk_crypto_keyslots)
+ goto fail_free_wq;
+
+ blk_crypto_bounce_page_pool =
+ mempool_create_page_pool(num_prealloc_bounce_pg, 0);
+ if (!blk_crypto_bounce_page_pool)
+ goto fail_free_keyslots;
+
+ bio_fallback_crypt_ctx_cache = KMEM_CACHE(bio_fallback_crypt_ctx, 0);
+ if (!bio_fallback_crypt_ctx_cache)
+ goto fail_free_bounce_page_pool;
+
+ bio_fallback_crypt_ctx_pool =
+ mempool_create_slab_pool(num_prealloc_fallback_crypt_ctxs,
+ bio_fallback_crypt_ctx_cache);
+ if (!bio_fallback_crypt_ctx_pool)
+ goto fail_free_crypt_ctx_cache;
+
+ blk_crypto_fallback_inited = true;
+
+ return 0;
+fail_free_crypt_ctx_cache:
+ kmem_cache_destroy(bio_fallback_crypt_ctx_cache);
+fail_free_bounce_page_pool:
+ mempool_destroy(blk_crypto_bounce_page_pool);
+fail_free_keyslots:
+ kfree(blk_crypto_keyslots);
+fail_free_wq:
+ destroy_workqueue(blk_crypto_wq);
+fail_free_ksm:
+ blk_ksm_destroy(&blk_crypto_ksm);
out:
- blk_crypto_free_fallback_crypt_ctx(bio);
- return false;
+ return err;
}
/*
@@ -508,7 +622,11 @@ int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
return 0;
mutex_lock(&tfms_init_lock);
- if (likely(tfms_inited[mode_num]))
+ if (tfms_inited[mode_num])
+ goto out;
+
+ err = blk_crypto_fallback_init();
+ if (err)
goto out;
for (i = 0; i < blk_crypto_num_keyslots; i++) {
@@ -546,100 +664,3 @@ int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
mutex_unlock(&tfms_init_lock);
return err;
}
-
-int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
-{
- return keyslot_manager_evict_key(blk_crypto_ksm, key);
-}
-
-int blk_crypto_fallback_submit_bio(struct bio **bio_ptr)
-{
- struct bio *bio = *bio_ptr;
- struct bio_crypt_ctx *bc = bio->bi_crypt_context;
- struct bio_fallback_crypt_ctx *f_ctx;
-
- if (bc->bc_key->is_hw_wrapped) {
- pr_warn_once("HW wrapped key cannot be used with fallback.\n");
- bio->bi_status = BLK_STS_NOTSUPP;
- return -EOPNOTSUPP;
- }
-
- if (!tfms_inited[bc->bc_key->crypto_mode]) {
- bio->bi_status = BLK_STS_IOERR;
- return -EIO;
- }
-
- if (bio_data_dir(bio) == WRITE)
- return blk_crypto_encrypt_bio(bio_ptr);
-
- /*
- * Mark bio as fallback crypted and replace the bio_crypt_ctx with
- * another one contained in a bio_fallback_crypt_ctx, so that the
- * fallback has space to store the info it needs for decryption.
- */
- bc->bc_ksm = blk_crypto_ksm;
- f_ctx = mempool_alloc(bio_fallback_crypt_ctx_pool, GFP_NOIO);
- f_ctx->crypt_ctx = *bc;
- memcpy(f_ctx->fallback_dun, bc->bc_dun, sizeof(f_ctx->fallback_dun));
- f_ctx->crypt_iter = bio->bi_iter;
-
- bio_crypt_free_ctx(bio);
- bio->bi_crypt_context = &f_ctx->crypt_ctx;
-
- return 0;
-}
-
-int __init blk_crypto_fallback_init(void)
-{
- int i;
- unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX];
-
- prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
-
- /* All blk-crypto modes have a crypto API fallback. */
- for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++)
- crypto_mode_supported[i] = 0xFFFFFFFF;
- crypto_mode_supported[BLK_ENCRYPTION_MODE_INVALID] = 0;
-
- blk_crypto_ksm = keyslot_manager_create(
- NULL, blk_crypto_num_keyslots,
- &blk_crypto_ksm_ll_ops,
- BLK_CRYPTO_FEATURE_STANDARD_KEYS,
- crypto_mode_supported, NULL);
- if (!blk_crypto_ksm)
- return -ENOMEM;
-
- blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
- WQ_UNBOUND | WQ_HIGHPRI |
- WQ_MEM_RECLAIM, num_online_cpus());
- if (!blk_crypto_wq)
- return -ENOMEM;
-
- blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots,
- sizeof(blk_crypto_keyslots[0]),
- GFP_KERNEL);
- if (!blk_crypto_keyslots)
- return -ENOMEM;
-
- blk_crypto_bounce_page_pool =
- mempool_create_page_pool(num_prealloc_bounce_pg, 0);
- if (!blk_crypto_bounce_page_pool)
- return -ENOMEM;
-
- blk_crypto_decrypt_work_cache = KMEM_CACHE(blk_crypto_decrypt_work,
- SLAB_RECLAIM_ACCOUNT);
- if (!blk_crypto_decrypt_work_cache)
- return -ENOMEM;
-
- bio_fallback_crypt_ctx_cache = KMEM_CACHE(bio_fallback_crypt_ctx, 0);
- if (!bio_fallback_crypt_ctx_cache)
- return -ENOMEM;
-
- bio_fallback_crypt_ctx_pool =
- mempool_create_slab_pool(num_prealloc_fallback_crypt_ctxs,
- bio_fallback_crypt_ctx_cache);
- if (!bio_fallback_crypt_ctx_pool)
- return -ENOMEM;
-
- return 0;
-}
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index 4da998c..d2b0f56 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -7,6 +7,7 @@
#define __LINUX_BLK_CRYPTO_INTERNAL_H
#include <linux/bio.h>
+#include <linux/blkdev.h>
/* Represents a crypto mode supported by blk-crypto */
struct blk_crypto_mode {
@@ -17,18 +18,162 @@ struct blk_crypto_mode {
extern const struct blk_crypto_mode blk_crypto_modes[];
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+ unsigned int inc);
+
+bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio);
+
+bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes,
+ struct bio_crypt_ctx *bc2);
+
+static inline bool bio_crypt_ctx_back_mergeable(struct request *req,
+ struct bio *bio)
+{
+ return bio_crypt_ctx_mergeable(req->crypt_ctx, blk_rq_bytes(req),
+ bio->bi_crypt_context);
+}
+
+static inline bool bio_crypt_ctx_front_mergeable(struct request *req,
+ struct bio *bio)
+{
+ return bio_crypt_ctx_mergeable(bio->bi_crypt_context,
+ bio->bi_iter.bi_size, req->crypt_ctx);
+}
+
+static inline bool bio_crypt_ctx_merge_rq(struct request *req,
+ struct request *next)
+{
+ return bio_crypt_ctx_mergeable(req->crypt_ctx, blk_rq_bytes(req),
+ next->crypt_ctx);
+}
+
+static inline void blk_crypto_rq_set_defaults(struct request *rq)
+{
+ rq->crypt_ctx = NULL;
+ rq->crypt_keyslot = NULL;
+}
+
+static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
+{
+ return rq->crypt_ctx;
+}
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline bool bio_crypt_rq_ctx_compatible(struct request *rq,
+ struct bio *bio)
+{
+ return true;
+}
+
+static inline bool bio_crypt_ctx_front_mergeable(struct request *req,
+ struct bio *bio)
+{
+ return true;
+}
+
+static inline bool bio_crypt_ctx_back_mergeable(struct request *req,
+ struct bio *bio)
+{
+ return true;
+}
+
+static inline bool bio_crypt_ctx_merge_rq(struct request *req,
+ struct request *next)
+{
+ return true;
+}
+
+static inline void blk_crypto_rq_set_defaults(struct request *rq) { }
+
+static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
+{
+ return false;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+ if (bio_has_crypt_ctx(bio))
+ __bio_crypt_advance(bio, bytes);
+}
+
+void __bio_crypt_free_ctx(struct bio *bio);
+static inline void bio_crypt_free_ctx(struct bio *bio)
+{
+ if (bio_has_crypt_ctx(bio))
+ __bio_crypt_free_ctx(bio);
+}
+
+static inline void bio_crypt_do_front_merge(struct request *rq,
+ struct bio *bio)
+{
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ if (bio_has_crypt_ctx(bio))
+ memcpy(rq->crypt_ctx->bc_dun, bio->bi_crypt_context->bc_dun,
+ sizeof(rq->crypt_ctx->bc_dun));
+#endif
+}
+
+bool __blk_crypto_bio_prep(struct bio **bio_ptr);
+static inline bool blk_crypto_bio_prep(struct bio **bio_ptr)
+{
+ if (bio_has_crypt_ctx(*bio_ptr))
+ return __blk_crypto_bio_prep(bio_ptr);
+ return true;
+}
+
+blk_status_t __blk_crypto_init_request(struct request *rq);
+static inline blk_status_t blk_crypto_init_request(struct request *rq)
+{
+ if (blk_crypto_rq_is_encrypted(rq))
+ return __blk_crypto_init_request(rq);
+ return BLK_STS_OK;
+}
+
+void __blk_crypto_free_request(struct request *rq);
+static inline void blk_crypto_free_request(struct request *rq)
+{
+ if (blk_crypto_rq_is_encrypted(rq))
+ __blk_crypto_free_request(rq);
+}
+
+void __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
+ gfp_t gfp_mask);
+static inline void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
+ gfp_t gfp_mask)
+{
+ if (bio_has_crypt_ctx(bio))
+ __blk_crypto_rq_bio_prep(rq, bio, gfp_mask);
+}
+
+/**
+ * blk_crypto_insert_cloned_request - Prepare a cloned request to be inserted
+ * into a request queue.
+ * @rq: the request being queued
+ *
+ * Return: BLK_STS_OK on success, nonzero on error.
+ */
+static inline blk_status_t blk_crypto_insert_cloned_request(struct request *rq)
+{
+ if (blk_crypto_rq_is_encrypted(rq))
+ return blk_crypto_init_request(rq);
+ return BLK_STS_OK;
+}
+
#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num);
-int blk_crypto_fallback_submit_bio(struct bio **bio_ptr);
-
-bool blk_crypto_queue_decrypt_bio(struct bio *bio);
+bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr);
int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
-bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc);
-
#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
static inline int
@@ -38,21 +183,10 @@ blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
return -ENOPKG;
}
-static inline bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc)
+static inline bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
{
- return false;
-}
-
-static inline int blk_crypto_fallback_submit_bio(struct bio **bio_ptr)
-{
- pr_warn_once("crypto API fallback disabled; failing request\n");
+ pr_warn_once("crypto API fallback disabled; failing request.\n");
(*bio_ptr)->bi_status = BLK_STS_NOTSUPP;
- return -EIO;
-}
-
-static inline bool blk_crypto_queue_decrypt_bio(struct bio *bio)
-{
- WARN_ON(1);
return false;
}
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index e07a37c..aa3a05f 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -9,11 +9,11 @@
#define pr_fmt(fmt) "blk-crypto: " fmt
-#include <linux/blk-crypto.h>
+#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/keyslot-manager.h>
-#include <linux/random.h>
-#include <linux/siphash.h>
+#include <linux/module.h>
+#include <linux/slab.h>
#include "blk-crypto-internal.h"
@@ -35,139 +35,266 @@ const struct blk_crypto_mode blk_crypto_modes[] = {
},
};
-/* Check that all I/O segments are data unit aligned */
-static int bio_crypt_check_alignment(struct bio *bio)
+/*
+ * This number needs to be at least (the number of threads doing IO
+ * concurrently) * (maximum recursive depth of a bio), so that we don't
+ * deadlock on crypt_ctx allocations. The default is chosen to be the same
+ * as the default number of post read contexts in both EXT4 and F2FS.
+ */
+static int num_prealloc_crypt_ctxs = 128;
+
+module_param(num_prealloc_crypt_ctxs, int, 0444);
+MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
+ "Number of bio crypto contexts to preallocate");
+
+static struct kmem_cache *bio_crypt_ctx_cache;
+static mempool_t *bio_crypt_ctx_pool;
+
+static int __init bio_crypt_ctx_init(void)
+{
+ size_t i;
+
+ bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0);
+ if (!bio_crypt_ctx_cache)
+ goto out_no_mem;
+
+ bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs,
+ bio_crypt_ctx_cache);
+ if (!bio_crypt_ctx_pool)
+ goto out_no_mem;
+
+ /* This is assumed in various places. */
+ BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0);
+
+ /* Sanity check that no algorithm exceeds the defined limits. */
+ for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) {
+ BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE);
+ BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE);
+ }
+
+ return 0;
+out_no_mem:
+ panic("Failed to allocate mem for bio crypt ctxs\n");
+}
+subsys_initcall(bio_crypt_ctx_init);
+
+void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
+ const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], gfp_t gfp_mask)
+{
+ struct bio_crypt_ctx *bc = mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
+
+ bc->bc_key = key;
+ memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
+
+ bio->bi_crypt_context = bc;
+}
+EXPORT_SYMBOL_GPL(bio_crypt_set_ctx);
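
For context, a minimal sketch of how an upper layer might attach an encryption
context to a bio it is about to submit. Only bio_crypt_set_ctx() and
BLK_CRYPTO_DUN_ARRAY_SIZE come from this series; the helper name and the
assumption that the DUN is simply the starting logical block number are
illustrative.

    /* Illustrative helper: attach @key to @bio, using @first_lblk as the DUN. */
    static void example_set_bio_crypt_ctx(struct bio *bio,
                                          const struct blk_crypto_key *key,
                                          u64 first_lblk)
    {
            u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_lblk };

            bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
    }
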
+
+void __bio_crypt_free_ctx(struct bio *bio)
+{
+ mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
+ bio->bi_crypt_context = NULL;
+}
+
+void __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
+{
+ dst->bi_crypt_context = mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
+ *dst->bi_crypt_context = *src->bi_crypt_context;
+}
+EXPORT_SYMBOL_GPL(__bio_crypt_clone);
+
+/* Increments @dun by @inc, treating @dun as a multi-limb integer. */
+void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+ unsigned int inc)
+{
+ int i;
+
+ for (i = 0; inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE; i++) {
+ dun[i] += inc;
+ /*
+ * If the addition in this limb overflowed, then we need to
+ * carry 1 into the next limb. Else the carry is 0.
+ */
+ if (dun[i] < inc)
+ inc = 1;
+ else
+ inc = 0;
+ }
+}
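
A worked example of the carry behaviour (values chosen purely for
illustration):

    u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { U64_MAX };

    bio_crypt_dun_increment(dun, 1);
    /* dun[0] wrapped to 0 and the carry propagated, so dun[1] is now 1. */
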
+
+void __bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+ struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+
+ bio_crypt_dun_increment(bc->bc_dun,
+ bytes >> bc->bc_key->data_unit_size_bits);
+}
+
+/*
+ * Returns true if @bc->bc_dun plus @bytes converted to data units is equal to
+ * @next_dun, treating the DUNs as multi-limb integers.
+ */
+bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
+ unsigned int bytes,
+ const u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
+{
+ int i;
+ unsigned int carry = bytes >> bc->bc_key->data_unit_size_bits;
+
+ for (i = 0; i < BLK_CRYPTO_DUN_ARRAY_SIZE; i++) {
+ if (bc->bc_dun[i] + carry != next_dun[i])
+ return false;
+ /*
+ * If the addition in this limb overflowed, then we need to
+ * carry 1 into the next limb. Else the carry is 0.
+ */
+ if ((bc->bc_dun[i] + carry) < carry)
+ carry = 1;
+ else
+ carry = 0;
+ }
+
+ /* If the DUN wrapped through 0, don't treat it as contiguous. */
+ return carry == 0;
+}
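
A small illustration of how this feeds the merge decisions below, assuming a
context @bc whose bc_dun[0] is 10 and whose key uses 4096-byte data units
(both made-up values):

    u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { 12 };

    /* 8192 bytes == 2 data units, and 10 + 2 == 12, so this returns true. */
    bio_crypt_dun_is_contiguous(bc, 8192, next_dun);
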
+
+/*
+ * Checks that two bio crypt contexts are compatible - i.e. that
+ * they are mergeable except for data_unit_num continuity.
+ */
+static bool bio_crypt_ctx_compatible(struct bio_crypt_ctx *bc1,
+ struct bio_crypt_ctx *bc2)
+{
+ if (!bc1)
+ return !bc2;
+
+ return bc2 && bc1->bc_key == bc2->bc_key;
+}
+
+bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio)
+{
+ return bio_crypt_ctx_compatible(rq->crypt_ctx, bio->bi_crypt_context);
+}
+
+/*
+ * Checks that two bio crypt contexts are compatible, and also
+ * that their data_unit_nums are continuous (and can hence be merged)
+ * in the order @bc1 followed by @bc2.
+ */
+bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes,
+ struct bio_crypt_ctx *bc2)
+{
+ if (!bio_crypt_ctx_compatible(bc1, bc2))
+ return false;
+
+ return !bc1 || bio_crypt_dun_is_contiguous(bc1, bc1_bytes, bc2->bc_dun);
+}
+
+/* Check that all I/O segments are data unit aligned. */
+static bool bio_crypt_check_alignment(struct bio *bio)
{
const unsigned int data_unit_size =
- bio->bi_crypt_context->bc_key->data_unit_size;
+ bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size;
struct bvec_iter iter;
struct bio_vec bv;
bio_for_each_segment(bv, bio, iter) {
if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
- return -EIO;
+ return false;
}
- return 0;
+
+ return true;
+}
+
+blk_status_t __blk_crypto_init_request(struct request *rq)
+{
+ return blk_ksm_get_slot_for_key(rq->q->ksm, rq->crypt_ctx->bc_key,
+ &rq->crypt_keyslot);
}
/**
- * blk_crypto_submit_bio - handle submitting bio for inline encryption
+ * __blk_crypto_free_request - Uninitialize the crypto fields of a request.
+ *
+ * @rq: The request whose crypto fields to uninitialize.
+ *
+ * Completely uninitializes the crypto fields of a request. If a keyslot has
+ * been programmed into some inline encryption hardware, that keyslot is
+ * released. The rq->crypt_ctx is also freed.
+ */
+void __blk_crypto_free_request(struct request *rq)
+{
+ blk_ksm_put_slot(rq->crypt_keyslot);
+ mempool_free(rq->crypt_ctx, bio_crypt_ctx_pool);
+ blk_crypto_rq_set_defaults(rq);
+}
+
+/**
+ * __blk_crypto_bio_prep - Prepare bio for inline encryption
*
* @bio_ptr: pointer to original bio pointer
*
- * If the bio doesn't have inline encryption enabled or the submitter already
- * specified a keyslot for the target device, do nothing. Else, a raw key must
- * have been provided, so acquire a device keyslot for it if supported. Else,
- * use the crypto API fallback.
+ * If the bio crypt context provided for the bio is supported by the underlying
+ * device's inline encryption hardware, do nothing.
*
- * When the crypto API fallback is used for encryption, blk-crypto may choose to
- * split the bio into 2 - the first one that will continue to be processed and
- * the second one that will be resubmitted via generic_make_request.
- * A bounce bio will be allocated to encrypt the contents of the aforementioned
- * "first one", and *bio_ptr will be updated to this bounce bio.
+ * Otherwise, try to perform en/decryption for this bio by falling back to the
+ * kernel crypto API. When the crypto API fallback is used for encryption,
+ * blk-crypto may choose to split the bio into 2 - the first one that will
+ * continue to be processed and the second one that will be resubmitted via
+ * generic_make_request. A bounce bio will be allocated to encrypt the contents
+ * of the aforementioned "first one", and *bio_ptr will be updated to this
+ * bounce bio.
*
- * Return: 0 if bio submission should continue; nonzero if bio_endio() was
- * already called so bio submission should abort.
+ * Caller must ensure bio has bio_crypt_ctx.
+ *
+ * Return: true on success; false on error (and bio->bi_status will be set
+ * appropriately, and bio_endio() will have been called so bio
+ * submission should abort).
*/
-int blk_crypto_submit_bio(struct bio **bio_ptr)
+bool __blk_crypto_bio_prep(struct bio **bio_ptr)
{
struct bio *bio = *bio_ptr;
- struct request_queue *q;
- struct bio_crypt_ctx *bc = bio->bi_crypt_context;
- int err;
+ const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key;
- if (!bc || !bio_has_data(bio))
- return 0;
+ /* Error if bio has no data. */
+ if (WARN_ON_ONCE(!bio_has_data(bio))) {
+ bio->bi_status = BLK_STS_IOERR;
+ goto fail;
+ }
+
+ if (!bio_crypt_check_alignment(bio)) {
+ bio->bi_status = BLK_STS_IOERR;
+ goto fail;
+ }
/*
- * When a read bio is marked for fallback decryption, its bi_iter is
- * saved so that when we decrypt the bio later, we know what part of it
- * was marked for fallback decryption (when the bio is passed down after
- * blk_crypto_submit bio, it may be split or advanced so we cannot rely
- * on the bi_iter while decrypting in blk_crypto_endio)
+ * Success if device supports the encryption context, or if we succeeded
+ * in falling back to the crypto API.
*/
- if (bio_crypt_fallback_crypted(bc))
- return 0;
+ if (blk_ksm_crypto_cfg_supported(bio->bi_disk->queue->ksm,
+ &bc_key->crypto_cfg))
+ return true;
- err = bio_crypt_check_alignment(bio);
- if (err) {
- bio->bi_status = BLK_STS_IOERR;
- goto out;
- }
-
- q = bio->bi_disk->queue;
-
- if (bc->bc_ksm) {
- /* Key already programmed into device? */
- if (q->ksm == bc->bc_ksm)
- return 0;
-
- /* Nope, release the existing keyslot. */
- bio_crypt_ctx_release_keyslot(bc);
- }
-
- /* Get device keyslot if supported */
- if (keyslot_manager_crypto_mode_supported(q->ksm,
- bc->bc_key->crypto_mode,
- blk_crypto_key_dun_bytes(bc->bc_key),
- bc->bc_key->data_unit_size,
- bc->bc_key->is_hw_wrapped)) {
- err = bio_crypt_ctx_acquire_keyslot(bc, q->ksm);
- if (!err)
- return 0;
-
- pr_warn_once("Failed to acquire keyslot for %s (err=%d). Falling back to crypto API.\n",
- bio->bi_disk->disk_name, err);
- }
-
- /* Fallback to crypto API */
- err = blk_crypto_fallback_submit_bio(bio_ptr);
- if (err)
- goto out;
-
- return 0;
-out:
+ if (blk_crypto_fallback_bio_prep(bio_ptr))
+ return true;
+fail:
bio_endio(*bio_ptr);
- return err;
+ return false;
}
/**
- * blk_crypto_endio - clean up bio w.r.t inline encryption during bio_endio
+ * __blk_crypto_rq_bio_prep - Prepare a request's crypt_ctx when its first bio
+ * is inserted
*
- * @bio: the bio to clean up
- *
- * If blk_crypto_submit_bio decided to fallback to crypto API for this bio,
- * we queue the bio for decryption into a workqueue and return false,
- * and call bio_endio(bio) at a later time (after the bio has been decrypted).
- *
- * If the bio is not to be decrypted by the crypto API, this function releases
- * the reference to the keyslot that blk_crypto_submit_bio got.
- *
- * Return: true if bio_endio should continue; false otherwise (bio_endio will
- * be called again when bio has been decrypted).
+ * @rq: The request to prepare
+ * @bio: The first bio being inserted into the request
+ * @gfp_mask: gfp mask
*/
-bool blk_crypto_endio(struct bio *bio)
+void __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
+ gfp_t gfp_mask)
{
- struct bio_crypt_ctx *bc = bio->bi_crypt_context;
-
- if (!bc)
- return true;
-
- if (bio_crypt_fallback_crypted(bc)) {
- /*
- * The only bios who's crypto is handled by the blk-crypto
- * fallback when they reach here are those with
- * bio_data_dir(bio) == READ, since WRITE bios that are
- * encrypted by the crypto API fallback are handled by
- * blk_crypto_encrypt_endio().
- */
- return !blk_crypto_queue_decrypt_bio(bio);
- }
-
- if (bc->bc_keyslot >= 0)
- bio_crypt_ctx_release_keyslot(bc);
-
- return true;
+ if (!rq->crypt_ctx)
+ rq->crypt_ctx = mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
+ *rq->crypt_ctx = *bio->bi_crypt_context;
}
/**
@@ -185,8 +312,8 @@ bool blk_crypto_endio(struct bio *bio)
* key is used
* @data_unit_size: the data unit size to use for en/decryption
*
- * Return: The blk_crypto_key that was prepared, or an ERR_PTR() on error. When
- * done using the key, it must be freed with blk_crypto_free_key().
+ * Return: 0 on success, -errno on failure. The caller is responsible for
+ * zeroizing both blk_key and raw_key when done with them.
*/
int blk_crypto_init_key(struct blk_crypto_key *blk_key,
const u8 *raw_key, unsigned int raw_key_size,
@@ -196,8 +323,6 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key,
unsigned int data_unit_size)
{
const struct blk_crypto_mode *mode;
- static siphash_key_t hash_key;
- u32 hash;
memset(blk_key, 0, sizeof(*blk_key));
@@ -216,91 +341,88 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key,
return -EINVAL;
}
- if (dun_bytes <= 0 || dun_bytes > BLK_CRYPTO_MAX_IV_SIZE)
+ if (dun_bytes == 0 || dun_bytes > BLK_CRYPTO_MAX_IV_SIZE)
return -EINVAL;
if (!is_power_of_2(data_unit_size))
return -EINVAL;
- blk_key->crypto_mode = crypto_mode;
- blk_key->data_unit_size = data_unit_size;
+ blk_key->crypto_cfg.crypto_mode = crypto_mode;
+ blk_key->crypto_cfg.dun_bytes = dun_bytes;
+ blk_key->crypto_cfg.data_unit_size = data_unit_size;
+ blk_key->crypto_cfg.is_hw_wrapped = is_hw_wrapped;
blk_key->data_unit_size_bits = ilog2(data_unit_size);
blk_key->size = raw_key_size;
- blk_key->is_hw_wrapped = is_hw_wrapped;
memcpy(blk_key->raw, raw_key, raw_key_size);
- /*
- * The keyslot manager uses the SipHash of the key to implement O(1) key
- * lookups while avoiding leaking information about the keys. It's
- * precomputed here so that it only needs to be computed once per key.
- */
- get_random_once(&hash_key, sizeof(hash_key));
- hash = (u32)siphash(raw_key, raw_key_size, &hash_key);
- blk_crypto_key_set_hash_and_dun_bytes(blk_key, hash, dun_bytes);
-
return 0;
}
EXPORT_SYMBOL_GPL(blk_crypto_init_key);
+/*
+ * Check if bios with @cfg can be en/decrypted by blk-crypto (i.e. either the
+ * request queue it's submitted to supports inline crypto, or the
+ * blk-crypto-fallback is enabled and supports the cfg).
+ */
+bool blk_crypto_config_supported(struct request_queue *q,
+ const struct blk_crypto_config *cfg)
+{
+ return IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) ||
+ blk_ksm_crypto_cfg_supported(q->ksm, cfg);
+}
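
A hedged sketch of the check an fscrypt-style caller might perform before
committing to inline encryption for a particular configuration. The chosen
mode, data unit size and DUN width are illustrative; only
blk_crypto_config_supported() and struct blk_crypto_config come from this
series.

    static bool example_can_use_inline_crypto(struct request_queue *q)
    {
            struct blk_crypto_config cfg = {
                    .crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
                    .data_unit_size = 4096,
                    .dun_bytes = 8,
                    .is_hw_wrapped = false,
            };

            /* True if the hardware or blk-crypto-fallback can handle @cfg. */
            return blk_crypto_config_supported(q, &cfg);
    }
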
+
/**
- * blk_crypto_start_using_mode() - Start using blk-crypto on a device
- * @crypto_mode: the crypto mode that will be used
- * @dun_bytes: number of bytes that will be used to specify the DUN
- * @data_unit_size: the data unit size that will be used
- * @is_hw_wrapped_key: whether the key will be hardware-wrapped
+ * blk_crypto_start_using_key() - Start using a blk_crypto_key on a device
+ * @key: A key to use on the device
* @q: the request queue for the device
*
* Upper layers must call this function to ensure that either the hardware
- * supports the needed crypto settings, or the crypto API fallback has
- * transforms for the needed mode allocated and ready to go.
+ * supports the key's crypto settings, or the crypto API fallback has transforms
+ * for the needed mode allocated and ready to go. This function may allocate
+ * an skcipher, and *should not* be called from the data path, since that might
+ * cause a deadlock.
*
- * Return: 0 on success; -ENOPKG if the hardware doesn't support the crypto
- * settings and blk-crypto-fallback is either disabled or the needed
- * algorithm is disabled in the crypto API; or another -errno code.
+ * Return: 0 on success; -ENOPKG if the hardware doesn't support the key and
+ * blk-crypto-fallback is either disabled or the needed algorithm
+ * is disabled in the crypto API; or another -errno code.
*/
-int blk_crypto_start_using_mode(enum blk_crypto_mode_num crypto_mode,
- unsigned int dun_bytes,
- unsigned int data_unit_size,
- bool is_hw_wrapped_key,
- struct request_queue *q)
+int blk_crypto_start_using_key(const struct blk_crypto_key *key,
+ struct request_queue *q)
{
- if (keyslot_manager_crypto_mode_supported(q->ksm, crypto_mode,
- dun_bytes, data_unit_size,
- is_hw_wrapped_key))
+ if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg))
return 0;
- if (is_hw_wrapped_key) {
+ if (key->crypto_cfg.is_hw_wrapped) {
pr_warn_once("hardware doesn't support wrapped keys\n");
return -EOPNOTSUPP;
}
- return blk_crypto_fallback_start_using_mode(crypto_mode);
+ return blk_crypto_fallback_start_using_mode(key->crypto_cfg.crypto_mode);
}
-EXPORT_SYMBOL_GPL(blk_crypto_start_using_mode);
+EXPORT_SYMBOL_GPL(blk_crypto_start_using_key);
/**
* blk_crypto_evict_key() - Evict a key from any inline encryption hardware
* it may have been programmed into
- * @q: The request queue who's keyslot manager this key might have been
- * programmed into
+ * @q: The request queue whose associated inline encryption hardware this key
+ * might have been programmed into
* @key: The key to evict
*
- * Upper layers (filesystems) should call this function to ensure that a key
- * is evicted from hardware that it might have been programmed into. This
- * will call keyslot_manager_evict_key on the queue's keyslot manager, if one
- * exists, and supports the crypto algorithm with the specified data unit size.
- * Otherwise, it will evict the key from the blk-crypto-fallback's ksm.
+ * Upper layers (filesystems) must call this function to ensure that a key is
+ * evicted from any hardware that it might have been programmed into. The key
+ * must not be in use by any in-flight IO when this function is called.
*
- * Return: 0 on success, -err on error.
+ * Return: 0 on success or if key is not present in the q's ksm, -err on error.
*/
int blk_crypto_evict_key(struct request_queue *q,
const struct blk_crypto_key *key)
{
- if (q->ksm &&
- keyslot_manager_crypto_mode_supported(q->ksm, key->crypto_mode,
- blk_crypto_key_dun_bytes(key),
- key->data_unit_size,
- key->is_hw_wrapped))
- return keyslot_manager_evict_key(q->ksm, key);
+ if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg))
+ return blk_ksm_evict_key(q->ksm, key);
+ /*
+ * If the request queue's associated inline encryption hardware didn't
+ * have support for the key, then the key might have been programmed
+ * into the fallback keyslot manager, so try to evict from there.
+ */
return blk_crypto_fallback_evict_key(key);
}
EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
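
Putting the pieces together, a rough sketch of the key lifecycle an upper layer
would follow with the reworked API. The raw-key buffer, sizes and mode are
illustrative, and the argument order of blk_crypto_init_key() follows its
declaration in this series. Note that blk_crypto_start_using_key() may allocate
fallback transforms, so it must not be called from the data path.

    static int example_key_lifecycle(struct request_queue *q,
                                     const u8 *raw_key, unsigned int key_size)
    {
            struct blk_crypto_key key;
            int err;

            /* Standard (non-wrapped) key, 8-byte DUNs, 4096-byte data units. */
            err = blk_crypto_init_key(&key, raw_key, key_size,
                                      false /* is_hw_wrapped */,
                                      BLK_ENCRYPTION_MODE_AES_256_XTS,
                                      8, 4096);
            if (err)
                    return err;

            err = blk_crypto_start_using_key(&key, q);
            if (err)
                    return err;

            /* ... submit bios carrying this key via bio_crypt_set_ctx() ... */

            /* On key removal, make sure no keyslot still holds the key. */
            return blk_crypto_evict_key(q, &key);
    }
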
diff --git a/block/blk-integrity.c b/block/blk-integrity.c
index ff1070e..c03705c 100644
--- a/block/blk-integrity.c
+++ b/block/blk-integrity.c
@@ -409,6 +409,13 @@ void blk_integrity_register(struct gendisk *disk, struct blk_integrity *template
bi->tag_size = template->tag_size;
disk->queue->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ if (disk->queue->ksm) {
+ pr_warn("blk-integrity: Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n");
+ blk_ksm_unregister(disk->queue);
+ }
+#endif
}
EXPORT_SYMBOL(blk_integrity_register);
diff --git a/block/blk-map.c b/block/blk-map.c
index b72c361..92e23e5 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -549,6 +549,7 @@ int blk_rq_append_bio(struct request *rq, struct bio **bio)
rq->biotail->bi_next = *bio;
rq->biotail = *bio;
rq->__data_len += (*bio)->bi_iter.bi_size;
+ bio_crypt_free_ctx(*bio);
}
return 0;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 842e6d5..a0c24b6 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -596,13 +596,13 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
if (blk_integrity_rq(req) &&
integrity_req_gap_back_merge(req, bio))
return 0;
+ if (!bio_crypt_ctx_back_mergeable(req, bio))
+ return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
req_set_nomerge(req->q, req);
return 0;
}
- if (!bio_crypt_ctx_mergeable(req->bio, blk_rq_bytes(req), bio))
- return 0;
return ll_new_hw_segment(req, bio, nr_segs);
}
@@ -614,13 +614,13 @@ int ll_front_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs
if (blk_integrity_rq(req) &&
integrity_req_gap_front_merge(req, bio))
return 0;
+ if (!bio_crypt_ctx_front_mergeable(req, bio))
+ return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
req_set_nomerge(req->q, req);
return 0;
}
- if (!bio_crypt_ctx_mergeable(bio, bio->bi_iter.bi_size, req->bio))
- return 0;
return ll_new_hw_segment(req, bio, nr_segs);
}
@@ -665,7 +665,7 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
if (blk_integrity_merge_rq(q, req, next) == false)
return 0;
- if (!bio_crypt_ctx_mergeable(req->bio, blk_rq_bytes(req), next->bio))
+ if (!bio_crypt_ctx_merge_rq(req, next))
return 0;
/* Merge is OK... */
@@ -892,6 +892,10 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
if (blk_integrity_merge_bio(rq->q, rq, bio) == false)
return false;
+ /* Only merge if the crypt contexts are compatible */
+ if (!bio_crypt_rq_ctx_compatible(rq, bio))
+ return false;
+
/* must be using the same buffer */
if (req_op(rq) == REQ_OP_WRITE_SAME &&
!blk_write_same_mergeable(rq->bio, bio))
@@ -907,10 +911,6 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
if (rq->ioprio != bio_prio(bio))
return false;
- /* Only merge if the crypt contexts are compatible */
- if (!bio_crypt_ctx_compatible(bio, rq->bio))
- return false;
-
return true;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a7785df2..4f8d283 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -26,6 +26,7 @@
#include <linux/delay.h>
#include <linux/crash_dump.h>
#include <linux/prefetch.h>
+#include <linux/blk-crypto.h>
#include <trace/events/block.h>
@@ -317,6 +318,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
#if defined(CONFIG_BLK_DEV_INTEGRITY)
rq->nr_integrity_segments = 0;
#endif
+ blk_crypto_rq_set_defaults(rq);
/* tag was already set */
rq->extra_len = 0;
WRITE_ONCE(rq->deadline, 0);
@@ -474,6 +476,7 @@ static void __blk_mq_free_request(struct request *rq)
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
const int sched_tag = rq->internal_tag;
+ blk_crypto_free_request(rq);
blk_pm_mark_last_busy(rq);
rq->mq_hctx = NULL;
if (rq->tag != -1)
@@ -1782,6 +1785,7 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
rq->__sector = bio->bi_iter.bi_sector;
rq->write_hint = bio->bi_write_hint;
blk_rq_bio_prep(rq, bio, nr_segs);
+ blk_crypto_rq_bio_prep(rq, bio, GFP_NOIO);
blk_account_io_start(rq, true);
}
@@ -1983,6 +1987,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
struct request *same_queue_rq = NULL;
unsigned int nr_segs;
blk_qc_t cookie;
+ blk_status_t ret;
blk_queue_bounce(q, &bio);
__blk_queue_split(q, &bio, &nr_segs);
@@ -2016,6 +2021,14 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
blk_mq_bio_to_request(rq, bio, nr_segs);
+ ret = blk_crypto_init_request(rq);
+ if (ret != BLK_STS_OK) {
+ bio->bi_status = ret;
+ bio_endio(bio);
+ blk_mq_free_request(rq);
+ return BLK_QC_T_NONE;
+ }
+
plug = blk_mq_plug(q, bio);
if (unlikely(is_flush_fua)) {
/* Bypass scheduler for flush requests */
diff --git a/block/blk.h b/block/blk.h
index 0a94ec6..1f524ae 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -5,7 +5,9 @@
#include <linux/idr.h>
#include <linux/blk-mq.h>
#include <linux/part_stat.h>
+#include <linux/blk-crypto.h>
#include <xen/xen.h>
+#include "blk-crypto-internal.h"
#include "blk-mq.h"
#include "blk-mq-sched.h"
diff --git a/block/bounce.c b/block/bounce.c
index aa57ccc..c3aaed0 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -269,10 +269,14 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
bio_crypt_clone(bio, bio_src, gfp_mask);
- if (bio_integrity(bio_src) &&
- bio_integrity_clone(bio, bio_src, gfp_mask) < 0) {
- bio_put(bio);
- return NULL;
+ if (bio_integrity(bio_src)) {
+ int ret;
+
+ ret = bio_integrity_clone(bio, bio_src, gfp_mask);
+ if (ret < 0) {
+ bio_put(bio);
+ return NULL;
+ }
}
bio_clone_blkg_association(bio, bio_src);
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
index 74b7485..69d6be7 100644
--- a/block/keyslot-manager.c
+++ b/block/keyslot-manager.c
@@ -10,7 +10,7 @@
* into which encryption contexts may be programmed, and requests can be tagged
* with a slot number to specify the key to use for en/decryption.
*
- * As the number of slots are limited, and programming keys is expensive on
+ * As the number of slots is limited, and programming keys is expensive on
* many inline encryption hardware, we don't want to program the same key into
* multiple slots - if multiple requests are using the same key, we want to
* program just one slot with that key and use that slot for all requests.
@@ -22,10 +22,12 @@
* and tell it how to perform device specific operations like programming/
* evicting keys from keyslots.
*
- * Upper layers will call keyslot_manager_get_slot_for_key() to program a
+ * Upper layers will call blk_ksm_get_slot_for_key() to program a
* key into some slot in the inline encryption hardware.
*/
-#include <crypto/algapi.h>
+
+#define pr_fmt(fmt) "blk-crypto: " fmt
+
#include <linux/keyslot-manager.h>
#include <linux/atomic.h>
#include <linux/mutex.h>
@@ -33,159 +35,64 @@
#include <linux/wait.h>
#include <linux/blkdev.h>
-struct keyslot {
+struct blk_ksm_keyslot {
atomic_t slot_refs;
struct list_head idle_slot_node;
struct hlist_node hash_node;
- struct blk_crypto_key key;
+ const struct blk_crypto_key *key;
+ struct blk_keyslot_manager *ksm;
};
-struct keyslot_manager {
- unsigned int num_slots;
- struct keyslot_mgmt_ll_ops ksm_ll_ops;
- unsigned int features;
- unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX];
- unsigned int max_dun_bytes_supported;
- void *ll_priv_data;
-
-#ifdef CONFIG_PM
- /* Device for runtime power management (NULL if none) */
- struct device *dev;
-#endif
-
- /* Protects programming and evicting keys from the device */
- struct rw_semaphore lock;
-
- /* List of idle slots, with least recently used slot at front */
- wait_queue_head_t idle_slots_wait_queue;
- struct list_head idle_slots;
- spinlock_t idle_slots_lock;
-
- /*
- * Hash table which maps key hashes to keyslots, so that we can find a
- * key's keyslot in O(1) time rather than O(num_slots). Protected by
- * 'lock'. A cryptographic hash function is used so that timing attacks
- * can't leak information about the raw keys.
- */
- struct hlist_head *slot_hashtable;
- unsigned int slot_hashtable_size;
-
- /* Per-keyslot data */
- struct keyslot slots[];
-};
-
-static inline bool keyslot_manager_is_passthrough(struct keyslot_manager *ksm)
-{
- return ksm->num_slots == 0;
-}
-
-#ifdef CONFIG_PM
-static inline void keyslot_manager_set_dev(struct keyslot_manager *ksm,
- struct device *dev)
-{
- ksm->dev = dev;
-}
-
-/* If there's an underlying device and it's suspended, resume it. */
-static inline void keyslot_manager_pm_get(struct keyslot_manager *ksm)
-{
- if (ksm->dev)
- pm_runtime_get_sync(ksm->dev);
-}
-
-static inline void keyslot_manager_pm_put(struct keyslot_manager *ksm)
-{
- if (ksm->dev)
- pm_runtime_put_sync(ksm->dev);
-}
-#else /* CONFIG_PM */
-static inline void keyslot_manager_set_dev(struct keyslot_manager *ksm,
- struct device *dev)
-{
-}
-
-static inline void keyslot_manager_pm_get(struct keyslot_manager *ksm)
-{
-}
-
-static inline void keyslot_manager_pm_put(struct keyslot_manager *ksm)
-{
-}
-#endif /* !CONFIG_PM */
-
-static inline void keyslot_manager_hw_enter(struct keyslot_manager *ksm)
+static inline void blk_ksm_hw_enter(struct blk_keyslot_manager *ksm)
{
/*
* Calling into the driver requires ksm->lock held and the device
* resumed. But we must resume the device first, since that can acquire
- * and release ksm->lock via keyslot_manager_reprogram_all_keys().
+ * and release ksm->lock via blk_ksm_reprogram_all_keys().
*/
- keyslot_manager_pm_get(ksm);
+ if (ksm->dev)
+ pm_runtime_get_sync(ksm->dev);
down_write(&ksm->lock);
}
-static inline void keyslot_manager_hw_exit(struct keyslot_manager *ksm)
+static inline void blk_ksm_hw_exit(struct blk_keyslot_manager *ksm)
{
up_write(&ksm->lock);
- keyslot_manager_pm_put(ksm);
+ if (ksm->dev)
+ pm_runtime_put_sync(ksm->dev);
+}
+
+static inline bool blk_ksm_is_passthrough(struct blk_keyslot_manager *ksm)
+{
+ return ksm->num_slots == 0;
}
/**
- * keyslot_manager_create() - Create a keyslot manager
- * @dev: Device for runtime power management (NULL if none)
+ * blk_ksm_init() - Initialize a keyslot manager
+ * @ksm: The keyslot_manager to initialize.
* @num_slots: The number of key slots to manage.
- * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot
- * manager will use to perform operations like programming and
- * evicting keys.
- * @features: The supported features as a bitmask of BLK_CRYPTO_FEATURE_* flags.
- * Most drivers should set BLK_CRYPTO_FEATURE_STANDARD_KEYS here.
- * @crypto_mode_supported: Array of size BLK_ENCRYPTION_MODE_MAX of
- * bitmasks that represents whether a crypto mode
- * and data unit size are supported. The i'th bit
- * of crypto_mode_supported[crypto_mode] is set iff
- * a data unit size of (1 << i) is supported. We
- * only support data unit sizes that are powers of
- * 2.
- * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
*
- * Allocate memory for and initialize a keyslot manager. Called by e.g.
- * storage drivers to set up a keyslot manager in their request_queue.
+ * Allocate memory for keyslots and initialize a keyslot manager. Called by
+ * e.g. storage drivers to set up a keyslot manager in their request_queue.
*
- * Context: May sleep
- * Return: Pointer to constructed keyslot manager or NULL on error.
+ * Return: 0 on success, or else a negative error code.
*/
-struct keyslot_manager *keyslot_manager_create(
- struct device *dev,
- unsigned int num_slots,
- const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
- unsigned int features,
- const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
- void *ll_priv_data)
+int blk_ksm_init(struct blk_keyslot_manager *ksm, unsigned int num_slots)
{
- struct keyslot_manager *ksm;
unsigned int slot;
unsigned int i;
+ unsigned int slot_hashtable_size;
+
+ memset(ksm, 0, sizeof(*ksm));
if (num_slots == 0)
- return NULL;
+ return -EINVAL;
- /* Check that all ops are specified */
- if (ksm_ll_ops->keyslot_program == NULL ||
- ksm_ll_ops->keyslot_evict == NULL)
- return NULL;
-
- ksm = kvzalloc(struct_size(ksm, slots, num_slots), GFP_KERNEL);
- if (!ksm)
- return NULL;
+ ksm->slots = kvcalloc(num_slots, sizeof(ksm->slots[0]), GFP_KERNEL);
+ if (!ksm->slots)
+ return -ENOMEM;
ksm->num_slots = num_slots;
- ksm->ksm_ll_ops = *ksm_ll_ops;
- ksm->features = features;
- memcpy(ksm->crypto_mode_supported, crypto_mode_supported,
- sizeof(ksm->crypto_mode_supported));
- ksm->max_dun_bytes_supported = BLK_CRYPTO_MAX_IV_SIZE;
- ksm->ll_priv_data = ll_priv_data;
- keyslot_manager_set_dev(ksm, dev);
init_rwsem(&ksm->lock);
@@ -193,120 +100,125 @@ struct keyslot_manager *keyslot_manager_create(
INIT_LIST_HEAD(&ksm->idle_slots);
for (slot = 0; slot < num_slots; slot++) {
+ ksm->slots[slot].ksm = ksm;
list_add_tail(&ksm->slots[slot].idle_slot_node,
&ksm->idle_slots);
}
spin_lock_init(&ksm->idle_slots_lock);
- ksm->slot_hashtable_size = roundup_pow_of_two(num_slots);
- ksm->slot_hashtable = kvmalloc_array(ksm->slot_hashtable_size,
+ slot_hashtable_size = roundup_pow_of_two(num_slots);
+ ksm->log_slot_ht_size = ilog2(slot_hashtable_size);
+ ksm->slot_hashtable = kvmalloc_array(slot_hashtable_size,
sizeof(ksm->slot_hashtable[0]),
GFP_KERNEL);
if (!ksm->slot_hashtable)
- goto err_free_ksm;
- for (i = 0; i < ksm->slot_hashtable_size; i++)
+ goto err_destroy_ksm;
+ for (i = 0; i < slot_hashtable_size; i++)
INIT_HLIST_HEAD(&ksm->slot_hashtable[i]);
- return ksm;
+ return 0;
-err_free_ksm:
- keyslot_manager_destroy(ksm);
- return NULL;
+err_destroy_ksm:
+ blk_ksm_destroy(ksm);
+ return -ENOMEM;
}
-EXPORT_SYMBOL_GPL(keyslot_manager_create);
-
-void keyslot_manager_set_max_dun_bytes(struct keyslot_manager *ksm,
- unsigned int max_dun_bytes)
-{
- ksm->max_dun_bytes_supported = max_dun_bytes;
-}
-EXPORT_SYMBOL_GPL(keyslot_manager_set_max_dun_bytes);
+EXPORT_SYMBOL_GPL(blk_ksm_init);
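
For a storage driver, a minimal sketch of initializing and registering an
embedded keyslot manager. struct my_host, my_ksm_ll_ops and MY_NUM_KEYSLOTS are
placeholders for whatever the driver defines, and the capability values are
illustrative. blk_ksm_register() refuses to attach the keyslot manager if the
queue already advertises blk-integrity support, so the failure path simply
tears it down again.

    static int example_driver_init_ksm(struct my_host *host,
                                       struct request_queue *q)
    {
            int err;

            err = blk_ksm_init(&host->ksm, MY_NUM_KEYSLOTS);
            if (err)
                    return err;

            host->ksm.ksm_ll_ops = my_ksm_ll_ops;   /* keyslot_program/evict */
            host->ksm.dev = host->dev;              /* for runtime PM */
            host->ksm.features = BLK_CRYPTO_FEATURE_STANDARD_KEYS;
            host->ksm.max_dun_bytes_supported = 8;
            /* Each set bit of crypto_modes_supported[] is a supported data unit size. */
            host->ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 4096;

            if (!blk_ksm_register(&host->ksm, q))
                    blk_ksm_destroy(&host->ksm);

            return 0;
    }
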
static inline struct hlist_head *
-hash_bucket_for_key(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key)
+blk_ksm_hash_bucket_for_key(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
{
- return &ksm->slot_hashtable[blk_crypto_key_hash(key) &
- (ksm->slot_hashtable_size - 1)];
+ return &ksm->slot_hashtable[hash_ptr(key, ksm->log_slot_ht_size)];
}
-static void remove_slot_from_lru_list(struct keyslot_manager *ksm, int slot)
+static void blk_ksm_remove_slot_from_lru_list(struct blk_ksm_keyslot *slot)
{
+ struct blk_keyslot_manager *ksm = slot->ksm;
unsigned long flags;
spin_lock_irqsave(&ksm->idle_slots_lock, flags);
- list_del(&ksm->slots[slot].idle_slot_node);
+ list_del(&slot->idle_slot_node);
spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
}
-static int find_keyslot(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key)
+static struct blk_ksm_keyslot *blk_ksm_find_keyslot(
+ struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
{
- const struct hlist_head *head = hash_bucket_for_key(ksm, key);
- const struct keyslot *slotp;
+ const struct hlist_head *head = blk_ksm_hash_bucket_for_key(ksm, key);
+ struct blk_ksm_keyslot *slotp;
hlist_for_each_entry(slotp, head, hash_node) {
- if (slotp->key.hash == key->hash &&
- slotp->key.crypto_mode == key->crypto_mode &&
- slotp->key.size == key->size &&
- slotp->key.data_unit_size == key->data_unit_size &&
- !crypto_memneq(slotp->key.raw, key->raw, key->size))
- return slotp - ksm->slots;
+ if (slotp->key == key)
+ return slotp;
}
- return -ENOKEY;
+ return NULL;
}
-static int find_and_grab_keyslot(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key)
+static struct blk_ksm_keyslot *blk_ksm_find_and_grab_keyslot(
+ struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
{
- int slot;
+ struct blk_ksm_keyslot *slot;
- slot = find_keyslot(ksm, key);
- if (slot < 0)
- return slot;
- if (atomic_inc_return(&ksm->slots[slot].slot_refs) == 1) {
+ slot = blk_ksm_find_keyslot(ksm, key);
+ if (!slot)
+ return NULL;
+ if (atomic_inc_return(&slot->slot_refs) == 1) {
/* Took first reference to this slot; remove it from LRU list */
- remove_slot_from_lru_list(ksm, slot);
+ blk_ksm_remove_slot_from_lru_list(slot);
}
return slot;
}
+unsigned int blk_ksm_get_slot_idx(struct blk_ksm_keyslot *slot)
+{
+ return slot - slot->ksm->slots;
+}
+EXPORT_SYMBOL_GPL(blk_ksm_get_slot_idx);
+
/**
- * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
+ * blk_ksm_get_slot_for_key() - Program a key into a keyslot.
* @ksm: The keyslot manager to program the key into.
* @key: Pointer to the key object to program, including the raw key, crypto
* mode, and data unit size.
+ * @slot_ptr: A pointer through which to return the allocated keyslot.
*
* Get a keyslot that's been programmed with the specified key. If one already
* exists, return it with incremented refcount. Otherwise, wait for a keyslot
* to become idle and program it.
*
* Context: Process context. Takes and releases ksm->lock.
- * Return: The keyslot on success, else a -errno value.
+ * Return: BLK_STS_OK on success (and *slot_ptr is set to the allocated
+ *	   keyslot), or some other blk_status_t otherwise (and *slot_ptr is
+ *	   set to NULL).
*/
-int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key)
+blk_status_t blk_ksm_get_slot_for_key(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ struct blk_ksm_keyslot **slot_ptr)
{
- int slot;
+ struct blk_ksm_keyslot *slot;
+ int slot_idx;
int err;
- struct keyslot *idle_slot;
- if (keyslot_manager_is_passthrough(ksm))
- return 0;
+ *slot_ptr = NULL;
+
+ if (blk_ksm_is_passthrough(ksm))
+ return BLK_STS_OK;
down_read(&ksm->lock);
- slot = find_and_grab_keyslot(ksm, key);
+ slot = blk_ksm_find_and_grab_keyslot(ksm, key);
up_read(&ksm->lock);
- if (slot != -ENOKEY)
- return slot;
+ if (slot)
+ goto success;
for (;;) {
- keyslot_manager_hw_enter(ksm);
- slot = find_and_grab_keyslot(ksm, key);
- if (slot != -ENOKEY) {
- keyslot_manager_hw_exit(ksm);
- return slot;
+ blk_ksm_hw_enter(ksm);
+ slot = blk_ksm_find_and_grab_keyslot(ksm, key);
+ if (slot) {
+ blk_ksm_hw_exit(ksm);
+ goto success;
}
/*
@@ -316,182 +228,146 @@ int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
if (!list_empty(&ksm->idle_slots))
break;
- keyslot_manager_hw_exit(ksm);
+ blk_ksm_hw_exit(ksm);
wait_event(ksm->idle_slots_wait_queue,
!list_empty(&ksm->idle_slots));
}
- idle_slot = list_first_entry(&ksm->idle_slots, struct keyslot,
- idle_slot_node);
- slot = idle_slot - ksm->slots;
+ slot = list_first_entry(&ksm->idle_slots, struct blk_ksm_keyslot,
+ idle_slot_node);
+ slot_idx = blk_ksm_get_slot_idx(slot);
- err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot);
+ err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot_idx);
if (err) {
wake_up(&ksm->idle_slots_wait_queue);
- keyslot_manager_hw_exit(ksm);
- return err;
+ blk_ksm_hw_exit(ksm);
+ return errno_to_blk_status(err);
}
/* Move this slot to the hash list for the new key. */
- if (idle_slot->key.crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
- hlist_del(&idle_slot->hash_node);
- hlist_add_head(&idle_slot->hash_node, hash_bucket_for_key(ksm, key));
+ if (slot->key)
+ hlist_del(&slot->hash_node);
+ slot->key = key;
+ hlist_add_head(&slot->hash_node, blk_ksm_hash_bucket_for_key(ksm, key));
- atomic_set(&idle_slot->slot_refs, 1);
- idle_slot->key = *key;
+ atomic_set(&slot->slot_refs, 1);
- remove_slot_from_lru_list(ksm, slot);
+ blk_ksm_remove_slot_from_lru_list(slot);
- keyslot_manager_hw_exit(ksm);
- return slot;
+ blk_ksm_hw_exit(ksm);
+success:
+ *slot_ptr = slot;
+ return BLK_STS_OK;
}
/**
- * keyslot_manager_get_slot() - Increment the refcount on the specified slot.
- * @ksm: The keyslot manager that we want to modify.
- * @slot: The slot to increment the refcount of.
- *
- * This function assumes that there is already an active reference to that slot
- * and simply increments the refcount. This is useful when cloning a bio that
- * already has a reference to a keyslot, and we want the cloned bio to also have
- * its own reference.
+ * blk_ksm_put_slot() - Release a reference to a slot
+ * @slot: The keyslot to release the reference of.
*
* Context: Any context.
*/
-void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
+void blk_ksm_put_slot(struct blk_ksm_keyslot *slot)
{
- if (keyslot_manager_is_passthrough(ksm))
- return;
-
- if (WARN_ON(slot >= ksm->num_slots))
- return;
-
- WARN_ON(atomic_inc_return(&ksm->slots[slot].slot_refs) < 2);
-}
-
-/**
- * keyslot_manager_put_slot() - Release a reference to a slot
- * @ksm: The keyslot manager to release the reference from.
- * @slot: The slot to release the reference from.
- *
- * Context: Any context.
- */
-void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
-{
+ struct blk_keyslot_manager *ksm;
unsigned long flags;
- if (keyslot_manager_is_passthrough(ksm))
+ if (!slot)
return;
- if (WARN_ON(slot >= ksm->num_slots))
- return;
+ ksm = slot->ksm;
- if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs,
+ if (atomic_dec_and_lock_irqsave(&slot->slot_refs,
&ksm->idle_slots_lock, flags)) {
- list_add_tail(&ksm->slots[slot].idle_slot_node,
- &ksm->idle_slots);
+ list_add_tail(&slot->idle_slot_node, &ksm->idle_slots);
spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
wake_up(&ksm->idle_slots_wait_queue);
}
}
/**
- * keyslot_manager_crypto_mode_supported() - Find out if a crypto_mode /
- * data unit size / is_hw_wrapped_key
- * combination is supported by a ksm.
+ * blk_ksm_crypto_cfg_supported() - Find out if a crypto configuration is
+ * supported by a ksm.
* @ksm: The keyslot manager to check
- * @crypto_mode: The crypto mode to check for.
- * @dun_bytes: The number of bytes that will be used to specify the DUN
- * @data_unit_size: The data_unit_size for the mode.
- * @is_hw_wrapped_key: Whether a hardware-wrapped key will be used.
+ * @cfg: The crypto configuration to check for.
*
- * Calls and returns the result of the crypto_mode_supported function specified
- * by the ksm.
+ * Checks for crypto_mode/data unit size/dun bytes support.
*
- * Context: Process context.
- * Return: Whether or not this ksm supports the specified crypto settings.
+ * Return: Whether or not this ksm supports the specified crypto config.
*/
-bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
- enum blk_crypto_mode_num crypto_mode,
- unsigned int dun_bytes,
- unsigned int data_unit_size,
- bool is_hw_wrapped_key)
+bool blk_ksm_crypto_cfg_supported(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_config *cfg)
{
if (!ksm)
return false;
- if (WARN_ON(crypto_mode >= BLK_ENCRYPTION_MODE_MAX))
+ if (!(ksm->crypto_modes_supported[cfg->crypto_mode] &
+ cfg->data_unit_size))
return false;
- if (WARN_ON(!is_power_of_2(data_unit_size)))
+ if (ksm->max_dun_bytes_supported < cfg->dun_bytes)
return false;
- if (is_hw_wrapped_key) {
+ if (cfg->is_hw_wrapped) {
if (!(ksm->features & BLK_CRYPTO_FEATURE_WRAPPED_KEYS))
return false;
} else {
if (!(ksm->features & BLK_CRYPTO_FEATURE_STANDARD_KEYS))
return false;
}
- if (!(ksm->crypto_mode_supported[crypto_mode] & data_unit_size))
- return false;
-
- return ksm->max_dun_bytes_supported >= dun_bytes;
+ return true;
}
/**
- * keyslot_manager_evict_key() - Evict a key from the lower layer device.
+ * blk_ksm_evict_key() - Evict a key from the lower layer device.
* @ksm: The keyslot manager to evict from
* @key: The key to evict
*
* Find the keyslot that the specified key was programmed into, and evict that
- * slot from the lower layer device if that slot is not currently in use.
+ * slot from the lower layer device. The slot must not be in use by any
+ * in-flight IO when this function is called.
*
* Context: Process context. Takes and releases ksm->lock.
- * Return: 0 on success, -EBUSY if the key is still in use, or another
- * -errno value on other error.
+ * Return: 0 on success or if there's no keyslot with the specified key, -EBUSY
+ * if the keyslot is still in use, or another -errno value on other
+ * error.
*/
-int keyslot_manager_evict_key(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key)
+int blk_ksm_evict_key(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key)
{
- int slot;
- int err;
- struct keyslot *slotp;
+ struct blk_ksm_keyslot *slot;
+ int err = 0;
- if (keyslot_manager_is_passthrough(ksm)) {
+ if (blk_ksm_is_passthrough(ksm)) {
if (ksm->ksm_ll_ops.keyslot_evict) {
- keyslot_manager_hw_enter(ksm);
+ blk_ksm_hw_enter(ksm);
err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, -1);
- keyslot_manager_hw_exit(ksm);
+ blk_ksm_hw_exit(ksm);
return err;
}
return 0;
}
- keyslot_manager_hw_enter(ksm);
-
- slot = find_keyslot(ksm, key);
- if (slot < 0) {
- err = slot;
+ blk_ksm_hw_enter(ksm);
+ slot = blk_ksm_find_keyslot(ksm, key);
+ if (!slot)
goto out_unlock;
- }
- slotp = &ksm->slots[slot];
- if (atomic_read(&slotp->slot_refs) != 0) {
+ if (WARN_ON_ONCE(atomic_read(&slot->slot_refs) != 0)) {
err = -EBUSY;
goto out_unlock;
}
- err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, slot);
+ err = ksm->ksm_ll_ops.keyslot_evict(ksm, key,
+ blk_ksm_get_slot_idx(slot));
if (err)
goto out_unlock;
- hlist_del(&slotp->hash_node);
- memzero_explicit(&slotp->key, sizeof(slotp->key));
+ hlist_del(&slot->hash_node);
+ slot->key = NULL;
err = 0;
out_unlock:
- keyslot_manager_hw_exit(ksm);
+ blk_ksm_hw_exit(ksm);
return err;
}
/**
- * keyslot_manager_reprogram_all_keys() - Re-program all keyslots.
+ * blk_ksm_reprogram_all_keys() - Re-program all keyslots.
* @ksm: The keyslot manager
*
* Re-program all keyslots that are supposed to have a key programmed. This is
@@ -499,133 +375,59 @@ int keyslot_manager_evict_key(struct keyslot_manager *ksm,
*
* Context: Process context. Takes and releases ksm->lock.
*/
-void keyslot_manager_reprogram_all_keys(struct keyslot_manager *ksm)
+void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm)
{
unsigned int slot;
- if (WARN_ON(keyslot_manager_is_passthrough(ksm)))
+ if (WARN_ON(blk_ksm_is_passthrough(ksm)))
return;
/* This is for device initialization, so don't resume the device */
down_write(&ksm->lock);
for (slot = 0; slot < ksm->num_slots; slot++) {
- const struct keyslot *slotp = &ksm->slots[slot];
+ const struct blk_crypto_key *key = ksm->slots[slot].key;
int err;
- if (slotp->key.crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+ if (!key)
continue;
- err = ksm->ksm_ll_ops.keyslot_program(ksm, &slotp->key, slot);
+ err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot);
WARN_ON(err);
}
up_write(&ksm->lock);
}
-EXPORT_SYMBOL_GPL(keyslot_manager_reprogram_all_keys);
+EXPORT_SYMBOL_GPL(blk_ksm_reprogram_all_keys);
-/**
- * keyslot_manager_private() - return the private data stored with ksm
- * @ksm: The keyslot manager
- *
- * Returns the private data passed to the ksm when it was created.
- */
-void *keyslot_manager_private(struct keyslot_manager *ksm)
+void blk_ksm_destroy(struct blk_keyslot_manager *ksm)
{
- return ksm->ll_priv_data;
-}
-EXPORT_SYMBOL_GPL(keyslot_manager_private);
-
-void keyslot_manager_destroy(struct keyslot_manager *ksm)
-{
- if (ksm) {
- kvfree(ksm->slot_hashtable);
- memzero_explicit(ksm, struct_size(ksm, slots, ksm->num_slots));
- kvfree(ksm);
- }
-}
-EXPORT_SYMBOL_GPL(keyslot_manager_destroy);
-
-/**
- * keyslot_manager_create_passthrough() - Create a passthrough keyslot manager
- * @dev: Device for runtime power management (NULL if none)
- * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops
- * @features: Bitmask of BLK_CRYPTO_FEATURE_* flags
- * @crypto_mode_supported: Bitmasks for supported encryption modes
- * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
- *
- * Allocate memory for and initialize a passthrough keyslot manager.
- * Called by e.g. storage drivers to set up a keyslot manager in their
- * request_queue, when the storage driver wants to manage its keys by itself.
- * This is useful for inline encryption hardware that don't have a small fixed
- * number of keyslots, and for layered devices.
- *
- * See keyslot_manager_create() for more details about the parameters.
- *
- * Context: This function may sleep
- * Return: Pointer to constructed keyslot manager or NULL on error.
- */
-struct keyslot_manager *keyslot_manager_create_passthrough(
- struct device *dev,
- const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
- unsigned int features,
- const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
- void *ll_priv_data)
-{
- struct keyslot_manager *ksm;
-
- ksm = kzalloc(sizeof(*ksm), GFP_KERNEL);
if (!ksm)
- return NULL;
-
- ksm->ksm_ll_ops = *ksm_ll_ops;
- ksm->features = features;
- memcpy(ksm->crypto_mode_supported, crypto_mode_supported,
- sizeof(ksm->crypto_mode_supported));
- ksm->max_dun_bytes_supported = BLK_CRYPTO_MAX_IV_SIZE;
- ksm->ll_priv_data = ll_priv_data;
- keyslot_manager_set_dev(ksm, dev);
-
- init_rwsem(&ksm->lock);
-
- return ksm;
+ return;
+ kvfree(ksm->slot_hashtable);
+ memzero_explicit(ksm->slots, sizeof(ksm->slots[0]) * ksm->num_slots);
+ kvfree(ksm->slots);
+ memzero_explicit(ksm, sizeof(*ksm));
}
-EXPORT_SYMBOL_GPL(keyslot_manager_create_passthrough);
+EXPORT_SYMBOL_GPL(blk_ksm_destroy);
-/**
- * keyslot_manager_intersect_modes() - restrict supported modes by child device
- * @parent: The keyslot manager for parent device
- * @child: The keyslot manager for child device, or NULL
- *
- * Clear any crypto mode support bits in @parent that aren't set in @child.
- * If @child is NULL, then all parent bits are cleared.
- *
- * Only use this when setting up the keyslot manager for a layered device,
- * before it's been exposed yet.
- */
-void keyslot_manager_intersect_modes(struct keyslot_manager *parent,
- const struct keyslot_manager *child)
+bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q)
{
- if (child) {
- unsigned int i;
-
- parent->features &= child->features;
- parent->max_dun_bytes_supported =
- min(parent->max_dun_bytes_supported,
- child->max_dun_bytes_supported);
- for (i = 0; i < ARRAY_SIZE(child->crypto_mode_supported); i++) {
- parent->crypto_mode_supported[i] &=
- child->crypto_mode_supported[i];
- }
- } else {
- parent->features = 0;
- parent->max_dun_bytes_supported = 0;
- memset(parent->crypto_mode_supported, 0,
- sizeof(parent->crypto_mode_supported));
+ if (blk_integrity_queue_supports_integrity(q)) {
+ pr_warn("Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n");
+ return false;
}
+ q->ksm = ksm;
+ return true;
}
-EXPORT_SYMBOL_GPL(keyslot_manager_intersect_modes);
+EXPORT_SYMBOL_GPL(blk_ksm_register);
+
+void blk_ksm_unregister(struct request_queue *q)
+{
+ q->ksm = NULL;
+}
+EXPORT_SYMBOL_GPL(blk_ksm_unregister);
/**
- * keyslot_manager_derive_raw_secret() - Derive software secret from wrapped key
+ * blk_ksm_derive_raw_secret() - Derive software secret from wrapped key
* @ksm: The keyslot manager
* @wrapped_key: The wrapped key
* @wrapped_key_size: Size of the wrapped key in bytes
@@ -641,23 +443,76 @@ EXPORT_SYMBOL_GPL(keyslot_manager_intersect_modes);
* Return: 0 on success, -EOPNOTSUPP if hardware-wrapped keys are unsupported,
* or another -errno code.
*/
-int keyslot_manager_derive_raw_secret(struct keyslot_manager *ksm,
- const u8 *wrapped_key,
- unsigned int wrapped_key_size,
- u8 *secret, unsigned int secret_size)
+int blk_ksm_derive_raw_secret(struct blk_keyslot_manager *ksm,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret, unsigned int secret_size)
{
int err;
if (ksm->ksm_ll_ops.derive_raw_secret) {
- keyslot_manager_hw_enter(ksm);
+ blk_ksm_hw_enter(ksm);
err = ksm->ksm_ll_ops.derive_raw_secret(ksm, wrapped_key,
wrapped_key_size,
secret, secret_size);
- keyslot_manager_hw_exit(ksm);
+ blk_ksm_hw_exit(ksm);
} else {
err = -EOPNOTSUPP;
}
return err;
}
-EXPORT_SYMBOL_GPL(keyslot_manager_derive_raw_secret);
+EXPORT_SYMBOL_GPL(blk_ksm_derive_raw_secret);
+
+/**
+ * blk_ksm_intersect_modes() - restrict supported modes by child device
+ * @parent: The keyslot manager for parent device
+ * @child: The keyslot manager for child device, or NULL
+ *
+ * Clear any crypto mode support bits in @parent that aren't set in @child.
+ * If @child is NULL, then all parent bits are cleared.
+ *
+ * Only use this when setting up the keyslot manager for a layered device,
+ * before it's been exposed yet.
+ */
+void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
+ const struct blk_keyslot_manager *child)
+{
+ if (child) {
+ unsigned int i;
+
+ parent->max_dun_bytes_supported =
+ min(parent->max_dun_bytes_supported,
+ child->max_dun_bytes_supported);
+ parent->features &= child->features;
+ for (i = 0; i < ARRAY_SIZE(child->crypto_modes_supported); i++) {
+ parent->crypto_modes_supported[i] &=
+ child->crypto_modes_supported[i];
+ }
+ } else {
+ parent->max_dun_bytes_supported = 0;
+ parent->features = 0;
+ memset(parent->crypto_modes_supported, 0,
+ sizeof(parent->crypto_modes_supported));
+ }
+}
+EXPORT_SYMBOL_GPL(blk_ksm_intersect_modes);
+
+/**
+ * blk_ksm_init_passthrough() - Init a passthrough keyslot manager
+ * @ksm: The keyslot manager to init
+ *
+ * Initialize a passthrough keyslot manager.
+ * Called by e.g. storage drivers to set up a keyslot manager in their
+ * request_queue, when the storage driver wants to manage its keys by itself.
+ * This is useful for inline encryption hardware that doesn't have a small fixed
+ * number of keyslots, and for layered devices.
+ *
+ * See blk_ksm_init() for more details about the parameters.
+ */
+void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm)
+{
+ memset(ksm, 0, sizeof(*ksm));
+ init_rwsem(&ksm->lock);
+}
+EXPORT_SYMBOL_GPL(blk_ksm_init_passthrough);
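
Finally, a hedged sketch of the layered-device side, in the same spirit as the
device-mapper changes below: start from a passthrough keyslot manager that
claims broad support, then intersect away anything an underlying device cannot
do. my_layered_ksm_ll_ops and the child queue are illustrative placeholders.

    static void example_setup_layered_ksm(struct blk_keyslot_manager *parent,
                                          struct request_queue *child_q)
    {
            blk_ksm_init_passthrough(parent);
            parent->ksm_ll_ops = my_layered_ksm_ll_ops; /* e.g. only keyslot_evict */

            /* Start out claiming support for everything... */
            parent->max_dun_bytes_supported = UINT_MAX;
            parent->features = BLK_CRYPTO_FEATURE_STANDARD_KEYS |
                               BLK_CRYPTO_FEATURE_WRAPPED_KEYS;
            memset(parent->crypto_modes_supported, 0xFF,
                   sizeof(parent->crypto_modes_supported));

            /* ...then clear whatever the underlying device does not support. */
            blk_ksm_intersect_modes(parent, child_q->ksm);
    }
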
diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index c4ef1fc..4542050 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -12,6 +12,7 @@
#include <linux/kthread.h>
#include <linux/ktime.h>
#include <linux/blk-mq.h>
+#include <linux/keyslot-manager.h>
#include <trace/events/block.h>
@@ -49,6 +50,9 @@ struct mapped_device {
int numa_node_id;
struct request_queue *queue;
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ struct blk_keyslot_manager ksm;
+#endif
atomic_t holders;
atomic_t open_count;
diff --git a/drivers/md/dm-default-key.c b/drivers/md/dm-default-key.c
index 0b10ab3..8af272e 100644
--- a/drivers/md/dm-default-key.c
+++ b/drivers/md/dm-default-key.c
@@ -245,9 +245,7 @@ static int default_key_ctr(struct dm_target *ti, unsigned int argc, char **argv)
goto bad;
}
- err = blk_crypto_start_using_mode(cipher->mode_num, dun_bytes,
- dkc->sector_size, dkc->is_hw_wrapped,
- dkc->dev->bdev->bd_queue);
+ err = blk_crypto_start_using_key(&dkc->key, dkc->dev->bdev->bd_queue);
if (err) {
ti->error = "Error starting to use blk-crypto";
goto bad;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 4060712..b0a27e9 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1624,10 +1624,10 @@ static int device_intersect_crypto_modes(struct dm_target *ti,
struct dm_dev *dev, sector_t start,
sector_t len, void *data)
{
- struct keyslot_manager *parent = data;
- struct keyslot_manager *child = bdev_get_queue(dev->bdev)->ksm;
+ struct blk_keyslot_manager *parent = data;
+ struct blk_keyslot_manager *child = bdev_get_queue(dev->bdev)->ksm;
- keyslot_manager_intersect_modes(parent, child);
+ blk_ksm_intersect_modes(parent, child);
return 0;
}
@@ -1651,7 +1651,7 @@ static void dm_calculate_supported_crypto_modes(struct dm_table *t,
ti = dm_table_get_target(t, i);
if (!ti->may_passthrough_inline_crypto) {
- keyslot_manager_intersect_modes(q->ksm, NULL);
+ blk_ksm_intersect_modes(q->ksm, NULL);
return;
}
if (!ti->type->iterate_devices)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 2d879ec..452fab8 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1340,6 +1340,7 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
if (bio_integrity(bio)) {
int r;
+
if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
!dm_target_passes_integrity(tio->ti->type))) {
DMWARN("%s: the target %s doesn't support integrity data.",
@@ -2301,10 +2302,10 @@ static int dm_keyslot_evict_callback(struct dm_target *ti, struct dm_dev *dev,
* When an inline encryption key is evicted from a device-mapper device, evict
* it from all the underlying devices.
*/
-static int dm_keyslot_evict(struct keyslot_manager *ksm,
+static int dm_keyslot_evict(struct blk_keyslot_manager *ksm,
const struct blk_crypto_key *key, unsigned int slot)
{
- struct mapped_device *md = keyslot_manager_private(ksm);
+ struct mapped_device *md = container_of(ksm, struct mapped_device, ksm);
struct dm_keyslot_evict_args args = { key };
struct dm_table *t;
int srcu_idx;
@@ -2347,10 +2348,10 @@ static int dm_derive_raw_secret_callback(struct dm_target *ti,
return 0;
}
- args->err = keyslot_manager_derive_raw_secret(q->ksm, args->wrapped_key,
- args->wrapped_key_size,
- args->secret,
- args->secret_size);
+ args->err = blk_ksm_derive_raw_secret(q->ksm, args->wrapped_key,
+ args->wrapped_key_size,
+ args->secret,
+ args->secret_size);
/* Try another device in case this fails. */
return 0;
}
@@ -2360,12 +2361,12 @@ static int dm_derive_raw_secret_callback(struct dm_target *ti,
* only one raw_secret can exist for a particular wrapped key,
* retrieve it only from the first device that supports derive_raw_secret()
*/
-static int dm_derive_raw_secret(struct keyslot_manager *ksm,
+static int dm_derive_raw_secret(struct blk_keyslot_manager *ksm,
const u8 *wrapped_key,
unsigned int wrapped_key_size,
u8 *secret, unsigned int secret_size)
{
- struct mapped_device *md = keyslot_manager_private(ksm);
+ struct mapped_device *md = container_of(ksm, struct mapped_device, ksm);
struct dm_derive_raw_secret_args args = {
.wrapped_key = wrapped_key,
.wrapped_key_size = wrapped_key_size,
@@ -2394,43 +2395,38 @@ static int dm_derive_raw_secret(struct keyslot_manager *ksm,
return args.err;
}
-static struct keyslot_mgmt_ll_ops dm_ksm_ll_ops = {
+static struct blk_ksm_ll_ops dm_ksm_ll_ops = {
.keyslot_evict = dm_keyslot_evict,
.derive_raw_secret = dm_derive_raw_secret,
};
-static int dm_init_inline_encryption(struct mapped_device *md)
+static void dm_init_inline_encryption(struct mapped_device *md)
{
- unsigned int features;
- unsigned int mode_masks[BLK_ENCRYPTION_MODE_MAX];
+ blk_ksm_init_passthrough(&md->ksm);
+ md->ksm.ksm_ll_ops = dm_ksm_ll_ops;
/*
- * Initially declare support for all crypto settings. Anything
+ * Initially declare support for all crypto settings. Anything
* unsupported by a child device will be removed later when calculating
* the device restrictions.
*/
- features = BLK_CRYPTO_FEATURE_STANDARD_KEYS |
- BLK_CRYPTO_FEATURE_WRAPPED_KEYS;
- memset(mode_masks, 0xFF, sizeof(mode_masks));
+ md->ksm.max_dun_bytes_supported = UINT_MAX;
+ md->ksm.features = BLK_CRYPTO_FEATURE_STANDARD_KEYS |
+ BLK_CRYPTO_FEATURE_WRAPPED_KEYS;
+ memset(md->ksm.crypto_modes_supported, 0xFF,
+ sizeof(md->ksm.crypto_modes_supported));
- md->queue->ksm = keyslot_manager_create_passthrough(NULL,
- &dm_ksm_ll_ops,
- features,
- mode_masks, md);
- if (!md->queue->ksm)
- return -ENOMEM;
- return 0;
+ blk_ksm_register(&md->ksm, md->queue);
}
static void dm_destroy_inline_encryption(struct request_queue *q)
{
- keyslot_manager_destroy(q->ksm);
- q->ksm = NULL;
+ blk_ksm_destroy(q->ksm);
+ blk_ksm_unregister(q);
}
#else /* CONFIG_BLK_INLINE_ENCRYPTION */
-static inline int dm_init_inline_encryption(struct mapped_device *md)
+static inline void dm_init_inline_encryption(struct mapped_device *md)
{
- return 0;
}
static inline void dm_destroy_inline_encryption(struct request_queue *q)
@@ -2478,11 +2474,7 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
return r;
}
- r = dm_init_inline_encryption(md);
- if (r) {
- DMERR("Cannot initialize inline encryption");
- return r;
- }
+ dm_init_inline_encryption(md);
dm_table_set_restrictions(t, md->queue, &limits);
blk_register_queue(md->disk);
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index e88cdcd..197e178 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -7,9 +7,9 @@
obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o
ufshcd-core-y += ufshcd.o ufs-sysfs.o
ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
obj-$(CONFIG_SCSI_UFS_MEDIATEK) += ufs-mediatek.o
obj-$(CONFIG_SCSI_UFS_TI_J721E) += ti-j721e-ufs.o
-ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
index ed86abe..074a6a0 100644
--- a/drivers/scsi/ufs/ufs-hisi.c
+++ b/drivers/scsi/ufs/ufs-hisi.c
@@ -475,14 +475,6 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
if (!host)
return -ENOMEM;
- /*
- * Inline crypto is currently broken with ufs-hisi because the keyslots
- * overlap with the vendor-specific SYS CTRL registers -- and even if
- * software uses only non-overlapping keyslots, the kernel crashes when
- * programming a key or a UFS error occurs on the first encrypted I/O.
- */
- hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
-
host->hba = hba;
ufshcd_set_variant(hba, host);
diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index 8b31774..57b9832 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -1062,13 +1062,6 @@ static void ufs_qcom_advertise_quirks(struct ufs_hba *hba)
| UFSHCD_QUIRK_DME_PEER_ACCESS_AUTO_MODE
| UFSHCD_QUIRK_BROKEN_PA_RXHSUNTERMCAP);
}
-
- /*
- * Inline crypto is currently broken with ufs-qcom at least because the
- * device tree doesn't include the crypto registers. There are likely
- * to be other issues that will need to be addressed too.
- */
- hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
}
static void ufs_qcom_set_caps(struct ufs_hba *hba)
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
index 43d105b..b52f0b9 100644
--- a/drivers/scsi/ufs/ufshcd-crypto.c
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -3,123 +3,23 @@
* Copyright 2019 Google LLC
*/
-#include <linux/keyslot-manager.h>
#include "ufshcd.h"
#include "ufshcd-crypto.h"
-static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
-{
- return cap_idx < hba->crypto_capabilities.num_crypto_cap;
-}
-
-static u8 get_data_unit_size_mask(unsigned int data_unit_size)
-{
- if (data_unit_size < 512 || data_unit_size > 65536 ||
- !is_power_of_2(data_unit_size))
- return 0;
-
- return data_unit_size / 512;
-}
-
-static size_t get_keysize_bytes(enum ufs_crypto_key_size size)
-{
- switch (size) {
- case UFS_CRYPTO_KEY_SIZE_128:
- return 16;
- case UFS_CRYPTO_KEY_SIZE_192:
- return 24;
- case UFS_CRYPTO_KEY_SIZE_256:
- return 32;
- case UFS_CRYPTO_KEY_SIZE_512:
- return 64;
- default:
- return 0;
- }
-}
-
-int ufshcd_crypto_cap_find(struct ufs_hba *hba,
- enum blk_crypto_mode_num crypto_mode,
- unsigned int data_unit_size)
-{
+/* Blk-crypto modes supported by UFS crypto */
+static const struct ufs_crypto_alg_entry {
enum ufs_crypto_alg ufs_alg;
- u8 data_unit_mask;
- int cap_idx;
enum ufs_crypto_key_size ufs_key_size;
- union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
-
- if (!ufshcd_hba_is_crypto_supported(hba))
- return -EINVAL;
-
- switch (crypto_mode) {
- case BLK_ENCRYPTION_MODE_AES_256_XTS:
- ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
- ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
- break;
- default:
- return -EINVAL;
- }
-
- data_unit_mask = get_data_unit_size_mask(data_unit_size);
-
- for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
- cap_idx++) {
- if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
- (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
- ccap_array[cap_idx].key_size == ufs_key_size)
- return cap_idx;
- }
-
- return -EINVAL;
-}
-EXPORT_SYMBOL_GPL(ufshcd_crypto_cap_find);
-
-/**
- * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
- *
- * Writes the key with the appropriate format - for AES_XTS,
- * the first half of the key is copied as is, the second half is
- * copied with an offset halfway into the cfg->crypto_key array.
- * For the other supported crypto algs, the key is just copied.
- *
- * @cfg: The crypto config to write to
- * @key: The key to write
- * @cap: The crypto capability (which specifies the crypto alg and key size)
- *
- * Returns 0 on success, or -EINVAL
- */
-static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
- const u8 *key,
- union ufs_crypto_cap_entry cap)
-{
- size_t key_size_bytes = get_keysize_bytes(cap.key_size);
-
- if (key_size_bytes == 0)
- return -EINVAL;
-
- switch (cap.algorithm_id) {
- case UFS_CRYPTO_ALG_AES_XTS:
- key_size_bytes *= 2;
- if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
- return -EINVAL;
-
- memcpy(cfg->crypto_key, key, key_size_bytes/2);
- memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
- key + key_size_bytes/2, key_size_bytes/2);
- return 0;
- case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC:
- /* fall through */
- case UFS_CRYPTO_ALG_AES_ECB:
- /* fall through */
- case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
- memcpy(cfg->crypto_key, key, key_size_bytes);
- return 0;
- }
-
- return -EINVAL;
-}
+} ufs_crypto_algs[BLK_ENCRYPTION_MODE_MAX] = {
+ [BLK_ENCRYPTION_MODE_AES_256_XTS] = {
+ .ufs_alg = UFS_CRYPTO_ALG_AES_XTS,
+ .ufs_key_size = UFS_CRYPTO_KEY_SIZE_256,
+ },
+};
static int ufshcd_program_key(struct ufs_hba *hba,
- const union ufs_crypto_cfg_entry *cfg, int slot)
+ const union ufs_crypto_cfg_entry *cfg,
+ int slot)
{
int i;
u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
@@ -132,81 +32,63 @@ static int ufshcd_program_key(struct ufs_hba *hba,
goto out;
}
- /* Clear the dword 16 */
- ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
/* Ensure that CFGE is cleared before programming the key */
- wmb();
+ ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
for (i = 0; i < 16; i++) {
ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]),
slot_offset + i * sizeof(cfg->reg_val[0]));
- /* Spec says each dword in key must be written sequentially */
- wmb();
}
/* Write dword 17 */
ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]),
slot_offset + 17 * sizeof(cfg->reg_val[0]));
/* Dword 16 must be written last */
- wmb();
- /* Write dword 16 */
ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]),
slot_offset + 16 * sizeof(cfg->reg_val[0]));
- wmb();
err = 0;
out:
ufshcd_release(hba);
return err;
}
-static void ufshcd_clear_keyslot(struct ufs_hba *hba, int slot)
-{
- union ufs_crypto_cfg_entry cfg = { 0 };
- int err;
-
- err = ufshcd_program_key(hba, &cfg, slot);
- WARN_ON_ONCE(err);
-}
-
-/* Clear all keyslots at driver init time */
-static void ufshcd_clear_all_keyslots(struct ufs_hba *hba)
-{
- int slot;
-
- for (slot = 0; slot < ufshcd_num_keyslots(hba); slot++)
- ufshcd_clear_keyslot(hba, slot);
-}
-
-static int ufshcd_crypto_keyslot_program(struct keyslot_manager *ksm,
+static int ufshcd_crypto_keyslot_program(struct blk_keyslot_manager *ksm,
const struct blk_crypto_key *key,
unsigned int slot)
{
- struct ufs_hba *hba = keyslot_manager_private(ksm);
- int err = 0;
- u8 data_unit_mask;
- union ufs_crypto_cfg_entry cfg;
- int cap_idx;
+ struct ufs_hba *hba = container_of(ksm, struct ufs_hba, ksm);
+ const union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
+ const struct ufs_crypto_alg_entry *alg =
+ &ufs_crypto_algs[key->crypto_cfg.crypto_mode];
+ u8 data_unit_mask = key->crypto_cfg.data_unit_size / 512;
+ int i;
+ int cap_idx = -1;
+ union ufs_crypto_cfg_entry cfg = { 0 };
+ int err;
- cap_idx = ufshcd_crypto_cap_find(hba, key->crypto_mode,
- key->data_unit_size);
+ BUILD_BUG_ON(UFS_CRYPTO_KEY_SIZE_INVALID != 0);
+ for (i = 0; i < hba->crypto_capabilities.num_crypto_cap; i++) {
+ if (ccap_array[i].algorithm_id == alg->ufs_alg &&
+ ccap_array[i].key_size == alg->ufs_key_size &&
+ (ccap_array[i].sdus_mask & data_unit_mask)) {
+ cap_idx = i;
+ break;
+ }
+ }
- if (!ufshcd_is_crypto_enabled(hba) ||
- !ufshcd_keyslot_valid(hba, slot) ||
- !ufshcd_cap_idx_valid(hba, cap_idx))
- return -EINVAL;
+ if (WARN_ON(cap_idx < 0))
+ return -EOPNOTSUPP;
- data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
-
- if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask))
- return -EINVAL;
-
- memset(&cfg, 0, sizeof(cfg));
cfg.data_unit_size = data_unit_mask;
cfg.crypto_cap_idx = cap_idx;
- cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
+ cfg.config_enable = UFS_CRYPTO_CONFIGURATION_ENABLE;
- err = ufshcd_crypto_cfg_entry_write_key(&cfg, key->raw,
- hba->crypto_cap_array[cap_idx]);
- if (err)
- return err;
+ if (ccap_array[cap_idx].algorithm_id == UFS_CRYPTO_ALG_AES_XTS) {
+ /* In XTS mode, the blk_crypto_key's size is already doubled */
+ memcpy(cfg.crypto_key, key->raw, key->size/2);
+ memcpy(cfg.crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
+ key->raw + key->size/2, key->size/2);
+ } else {
+ memcpy(cfg.crypto_key, key->raw, key->size);
+ }
err = ufshcd_program_key(hba, &cfg, slot);
@@ -215,60 +97,60 @@ static int ufshcd_crypto_keyslot_program(struct keyslot_manager *ksm,
return err;
}
-static int ufshcd_crypto_keyslot_evict(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key,
- unsigned int slot)
-{
- struct ufs_hba *hba = keyslot_manager_private(ksm);
- if (!ufshcd_is_crypto_enabled(hba) ||
- !ufshcd_keyslot_valid(hba, slot))
- return -EINVAL;
+static void ufshcd_clear_keyslot(struct ufs_hba *hba, int slot)
+{
+ union ufs_crypto_cfg_entry cfg = { 0 };
+ int err;
/*
* Clear the crypto cfg on the device. Clearing CFGE
* might not be sufficient, so just clear the entire cfg.
*/
+ err = ufshcd_program_key(hba, &cfg, slot);
+ WARN_ON_ONCE(err);
+}
+
+static int ufshcd_crypto_keyslot_evict(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ unsigned int slot)
+{
+ struct ufs_hba *hba = container_of(ksm, struct ufs_hba, ksm);
+
ufshcd_clear_keyslot(hba, slot);
return 0;
}
/* Functions implementing UFSHCI v2.1 specification behaviour */
-void ufshcd_crypto_enable_spec(struct ufs_hba *hba)
+bool ufshcd_crypto_enable_spec(struct ufs_hba *hba)
{
- if (!ufshcd_hba_is_crypto_supported(hba))
- return;
-
- hba->caps |= UFSHCD_CAP_CRYPTO;
+ if (!(hba->caps & UFSHCD_CAP_CRYPTO))
+ return false;
/* Reset might clear all keys, so reprogram all the keys. */
- keyslot_manager_reprogram_all_keys(hba->ksm);
+ blk_ksm_reprogram_all_keys(&hba->ksm);
+ return true;
}
-EXPORT_SYMBOL_GPL(ufshcd_crypto_enable_spec);
+EXPORT_SYMBOL(ufshcd_crypto_enable_spec);
-void ufshcd_crypto_disable_spec(struct ufs_hba *hba)
-{
- hba->caps &= ~UFSHCD_CAP_CRYPTO;
-}
-EXPORT_SYMBOL_GPL(ufshcd_crypto_disable_spec);
-
-static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
+static const struct blk_ksm_ll_ops ufshcd_ksm_ops = {
.keyslot_program = ufshcd_crypto_keyslot_program,
.keyslot_evict = ufshcd_crypto_keyslot_evict,
};
-enum blk_crypto_mode_num ufshcd_blk_crypto_mode_num_for_alg_dusize(
- enum ufs_crypto_alg ufs_crypto_alg,
- enum ufs_crypto_key_size key_size)
+static enum blk_crypto_mode_num
+ufshcd_find_blk_crypto_mode(union ufs_crypto_cap_entry cap)
{
- /*
- * This is currently the only mode that UFS and blk-crypto both support.
- */
- if (ufs_crypto_alg == UFS_CRYPTO_ALG_AES_XTS &&
- key_size == UFS_CRYPTO_KEY_SIZE_256)
- return BLK_ENCRYPTION_MODE_AES_256_XTS;
+ int i;
+ for (i = 0; i < ARRAY_SIZE(ufs_crypto_algs); i++) {
+ BUILD_BUG_ON(UFS_CRYPTO_KEY_SIZE_INVALID != 0);
+ if (ufs_crypto_algs[i].ufs_alg == cap.algorithm_id &&
+ ufs_crypto_algs[i].ufs_key_size == cap.key_size) {
+ return i;
+ }
+ }
return BLK_ENCRYPTION_MODE_INVALID;
}
@@ -279,44 +161,50 @@ enum blk_crypto_mode_num ufshcd_blk_crypto_mode_num_for_alg_dusize(
* Return: 0 if crypto was initialized or is not supported, else a -errno value.
*/
int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
- const struct keyslot_mgmt_ll_ops *ksm_ops)
+ const struct blk_ksm_ll_ops *ksm_ops)
{
int cap_idx = 0;
int err = 0;
- unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
enum blk_crypto_mode_num blk_mode_num;
-
- /* Default to disabling crypto */
- hba->caps &= ~UFSHCD_CAP_CRYPTO;
-
- /* Return 0 if crypto support isn't present */
- if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) ||
- (hba->quirks & UFSHCD_QUIRK_BROKEN_CRYPTO))
- goto out;
+ int slot = 0;
+ int num_keyslots;
/*
- * Crypto Capabilities should never be 0, because the
- * config_array_ptr > 04h. So we use a 0 value to indicate that
- * crypto init failed, and can't be enabled.
+ * Don't use crypto if either the hardware doesn't advertise the
+ * standard crypto capability bit *or* if the vendor specific driver
+ * hasn't advertised that crypto is supported.
*/
+ if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) ||
+ !(hba->caps & UFSHCD_CAP_CRYPTO))
+ goto out;
+
hba->crypto_capabilities.reg_val =
cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
hba->crypto_cfg_register =
(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
hba->crypto_cap_array =
- devm_kcalloc(hba->dev,
- hba->crypto_capabilities.num_crypto_cap,
- sizeof(hba->crypto_cap_array[0]),
- GFP_KERNEL);
+ devm_kcalloc(hba->dev, hba->crypto_capabilities.num_crypto_cap,
+ sizeof(hba->crypto_cap_array[0]), GFP_KERNEL);
if (!hba->crypto_cap_array) {
err = -ENOMEM;
goto out;
}
- memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+ /* The actual number of configurations supported is (CFGC+1) */
+ num_keyslots = hba->crypto_capabilities.config_count + 1;
+ err = blk_ksm_init(&hba->ksm, num_keyslots);
+ if (err)
+ goto out_free_caps;
+
+ hba->ksm.ksm_ll_ops = *ksm_ops;
+ /* UFS only supports 8 bytes for any DUN */
+ hba->ksm.max_dun_bytes_supported = 8;
+ hba->ksm.features = BLK_CRYPTO_FEATURE_STANDARD_KEYS;
+ hba->ksm.dev = hba->dev;
+
/*
- * Store all the capabilities now so that we don't need to repeatedly
- * access the device each time we want to know its capabilities
+ * Cache all the UFS crypto capabilities and advertise the supported
+ * crypto modes and data unit sizes to the block layer.
*/
for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
cap_idx++) {
@@ -324,89 +212,44 @@ int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
cpu_to_le32(ufshcd_readl(hba,
REG_UFS_CRYPTOCAP +
cap_idx * sizeof(__le32)));
- blk_mode_num = ufshcd_blk_crypto_mode_num_for_alg_dusize(
- hba->crypto_cap_array[cap_idx].algorithm_id,
- hba->crypto_cap_array[cap_idx].key_size);
- if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
- continue;
- crypto_modes_supported[blk_mode_num] |=
- hba->crypto_cap_array[cap_idx].sdus_mask * 512;
+ blk_mode_num = ufshcd_find_blk_crypto_mode(
+ hba->crypto_cap_array[cap_idx]);
+ if (blk_mode_num != BLK_ENCRYPTION_MODE_INVALID)
+ hba->ksm.crypto_modes_supported[blk_mode_num] |=
+ hba->crypto_cap_array[cap_idx].sdus_mask * 512;
}
- ufshcd_clear_all_keyslots(hba);
-
- hba->ksm = keyslot_manager_create(hba->dev, ufshcd_num_keyslots(hba),
- ksm_ops,
- BLK_CRYPTO_FEATURE_STANDARD_KEYS,
- crypto_modes_supported, hba);
-
- if (!hba->ksm) {
- err = -ENOMEM;
- goto out_free_caps;
- }
- keyslot_manager_set_max_dun_bytes(hba->ksm, sizeof(u64));
+ for (slot = 0; slot < num_keyslots; slot++)
+ ufshcd_clear_keyslot(hba, slot);
return 0;
out_free_caps:
devm_kfree(hba->dev, hba->crypto_cap_array);
out:
- /* Indicate that init failed by setting crypto_capabilities to 0 */
- hba->crypto_capabilities.reg_val = 0;
+ /* Indicate that init failed by clearing UFSHCD_CAP_CRYPTO */
+ hba->caps &= ~UFSHCD_CAP_CRYPTO;
return err;
}
-EXPORT_SYMBOL_GPL(ufshcd_hba_init_crypto_spec);
+EXPORT_SYMBOL(ufshcd_hba_init_crypto_spec);
void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
struct request_queue *q)
{
- if (!ufshcd_hba_is_crypto_supported(hba) || !q)
- return;
-
- q->ksm = hba->ksm;
+ if (hba->caps & UFSHCD_CAP_CRYPTO)
+ blk_ksm_register(&hba->ksm, q);
}
-EXPORT_SYMBOL_GPL(ufshcd_crypto_setup_rq_keyslot_manager_spec);
+EXPORT_SYMBOL(ufshcd_crypto_setup_rq_keyslot_manager_spec);
-void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
- struct request_queue *q)
+void ufshcd_crypto_destroy_keyslot_manager_spec(struct ufs_hba *hba)
{
- keyslot_manager_destroy(hba->ksm);
+ blk_ksm_destroy(&hba->ksm);
}
-EXPORT_SYMBOL_GPL(ufshcd_crypto_destroy_rq_keyslot_manager_spec);
-
-int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
- struct scsi_cmnd *cmd,
- struct ufshcd_lrb *lrbp)
-{
- struct bio_crypt_ctx *bc;
-
- if (!bio_crypt_should_process(cmd->request)) {
- lrbp->crypto_enable = false;
- return 0;
- }
- bc = cmd->request->bio->bi_crypt_context;
-
- if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) {
- /*
- * Upper layer asked us to do inline encryption
- * but that isn't enabled, so we fail this request.
- */
- return -EINVAL;
- }
- if (!ufshcd_keyslot_valid(hba, bc->bc_keyslot))
- return -EINVAL;
-
- lrbp->crypto_enable = true;
- lrbp->crypto_key_slot = bc->bc_keyslot;
- lrbp->data_unit_num = bc->bc_dun[0];
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(ufshcd_prepare_lrbp_crypto_spec);
+EXPORT_SYMBOL(ufshcd_crypto_destroy_keyslot_manager_spec);
/* Crypto Variant Ops Support */
-void ufshcd_crypto_enable(struct ufs_hba *hba)
+bool ufshcd_crypto_enable(struct ufs_hba *hba)
{
if (hba->crypto_vops && hba->crypto_vops->enable)
return hba->crypto_vops->enable(hba);
@@ -414,14 +257,6 @@ void ufshcd_crypto_enable(struct ufs_hba *hba)
return ufshcd_crypto_enable_spec(hba);
}
-void ufshcd_crypto_disable(struct ufs_hba *hba)
-{
- if (hba->crypto_vops && hba->crypto_vops->disable)
- return hba->crypto_vops->disable(hba);
-
- return ufshcd_crypto_disable_spec(hba);
-}
-
int ufshcd_hba_init_crypto(struct ufs_hba *hba)
{
if (hba->crypto_vops && hba->crypto_vops->hba_init_crypto)
@@ -434,29 +269,34 @@ int ufshcd_hba_init_crypto(struct ufs_hba *hba)
void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
struct request_queue *q)
{
- if (hba->crypto_vops && hba->crypto_vops->setup_rq_keyslot_manager)
- return hba->crypto_vops->setup_rq_keyslot_manager(hba, q);
+ if (hba->crypto_vops && hba->crypto_vops->setup_rq_keyslot_manager) {
+ hba->crypto_vops->setup_rq_keyslot_manager(hba, q);
+ return;
+ }
- return ufshcd_crypto_setup_rq_keyslot_manager_spec(hba, q);
+ ufshcd_crypto_setup_rq_keyslot_manager_spec(hba, q);
}
-void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
- struct request_queue *q)
+void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba)
{
- if (hba->crypto_vops && hba->crypto_vops->destroy_rq_keyslot_manager)
- return hba->crypto_vops->destroy_rq_keyslot_manager(hba, q);
+ if (hba->crypto_vops && hba->crypto_vops->destroy_keyslot_manager) {
+ hba->crypto_vops->destroy_keyslot_manager(hba);
+ return;
+ }
- return ufshcd_crypto_destroy_rq_keyslot_manager_spec(hba, q);
+ ufshcd_crypto_destroy_keyslot_manager_spec(hba);
}
-int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
- struct scsi_cmnd *cmd,
- struct ufshcd_lrb *lrbp)
+void ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp)
{
- if (hba->crypto_vops && hba->crypto_vops->prepare_lrbp_crypto)
- return hba->crypto_vops->prepare_lrbp_crypto(hba, cmd, lrbp);
+ if (hba->crypto_vops && hba->crypto_vops->prepare_lrbp_crypto) {
+ hba->crypto_vops->prepare_lrbp_crypto(hba, cmd, lrbp);
+ return;
+ }
- return ufshcd_prepare_lrbp_crypto_spec(hba, cmd, lrbp);
+ ufshcd_prepare_lrbp_crypto_spec(hba, cmd, lrbp);
}
int ufshcd_map_sg_crypto(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
index f223a06..7fbc459 100644
--- a/drivers/scsi/ufs/ufshcd-crypto.h
+++ b/drivers/scsi/ufs/ufshcd-crypto.h
@@ -7,78 +7,47 @@
#define _UFSHCD_CRYPTO_H
#ifdef CONFIG_SCSI_UFS_CRYPTO
-#include <linux/keyslot-manager.h>
#include "ufshcd.h"
#include "ufshci.h"
-static inline int ufshcd_num_keyslots(struct ufs_hba *hba)
+static inline void ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp)
{
- return hba->crypto_capabilities.config_count + 1;
+ struct request *rq = cmd->request;
+
+ if (rq->crypt_keyslot) {
+ lrbp->crypto_key_slot = blk_ksm_get_slot_idx(rq->crypt_keyslot);
+ lrbp->data_unit_num = rq->crypt_ctx->bc_dun[0];
+ } else {
+ lrbp->crypto_key_slot = -1;
+ }
}
-static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
-{
- /*
- * The actual number of configurations supported is (CFGC+1), so slot
- * numbers range from 0 to config_count inclusive.
- */
- return slot < ufshcd_num_keyslots(hba);
-}
+bool ufshcd_crypto_enable_spec(struct ufs_hba *hba);
-static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
-{
- return hba->crypto_capabilities.reg_val != 0;
-}
-
-static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
-{
- return hba->caps & UFSHCD_CAP_CRYPTO;
-}
-
-/* Functions implementing UFSHCI v2.1 specification behaviour */
-int ufshcd_crypto_cap_find(struct ufs_hba *hba,
- enum blk_crypto_mode_num crypto_mode,
- unsigned int data_unit_size);
-
-int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
- struct scsi_cmnd *cmd,
- struct ufshcd_lrb *lrbp);
-
-void ufshcd_crypto_enable_spec(struct ufs_hba *hba);
-
-void ufshcd_crypto_disable_spec(struct ufs_hba *hba);
-
-struct keyslot_mgmt_ll_ops;
+struct blk_ksm_ll_ops;
int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
- const struct keyslot_mgmt_ll_ops *ksm_ops);
+ const struct blk_ksm_ll_ops *ksm_ops);
void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
struct request_queue *q);
-void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
- struct request_queue *q);
-
-static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
-{
- return lrbp->crypto_enable;
-}
+void ufshcd_crypto_destroy_keyslot_manager_spec(struct ufs_hba *hba);
/* Crypto Variant Ops Support */
-void ufshcd_crypto_enable(struct ufs_hba *hba);
-
-void ufshcd_crypto_disable(struct ufs_hba *hba);
+bool ufshcd_crypto_enable(struct ufs_hba *hba);
int ufshcd_hba_init_crypto(struct ufs_hba *hba);
void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
struct request_queue *q);
-void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
- struct request_queue *q);
+void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba);
-int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
- struct scsi_cmnd *cmd,
- struct ufshcd_lrb *lrbp);
+void ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp);
int ufshcd_map_sg_crypto(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
@@ -97,26 +66,15 @@ void ufshcd_crypto_set_vops(struct ufs_hba *hba,
#else /* CONFIG_SCSI_UFS_CRYPTO */
-static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba,
- unsigned int slot)
+static inline void ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp) { }
+
+static inline bool ufshcd_crypto_enable(struct ufs_hba *hba)
{
return false;
}
-static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
-{
- return false;
-}
-
-static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
-{
- return false;
-}
-
-static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { }
-
-static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { }
-
static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
{
return 0;
@@ -125,15 +83,8 @@ static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
static inline void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
struct request_queue *q) { }
-static inline void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
- struct request_queue *q) { }
-
-static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
- struct scsi_cmnd *cmd,
- struct ufshcd_lrb *lrbp)
-{
- return 0;
-}
+static inline void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba)
+{ }
static inline int ufshcd_map_sg_crypto(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp)
@@ -141,11 +92,6 @@ static inline int ufshcd_map_sg_crypto(struct ufs_hba *hba,
return 0;
}
-static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
-{
- return false;
-}
-
static inline int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
struct scsi_cmnd *cmd,
struct ufshcd_lrb *lrbp)
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 1f1b519..b90aa81 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -817,10 +817,8 @@ static inline void ufshcd_hba_start(struct ufs_hba *hba)
{
u32 val = CONTROLLER_ENABLE;
- if (ufshcd_hba_is_crypto_supported(hba)) {
- ufshcd_crypto_enable(hba);
+ if (ufshcd_crypto_enable(hba))
val |= CRYPTO_GENERAL_ENABLE;
- }
ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
}
@@ -2231,6 +2229,8 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr;
u32 data_direction;
u32 dword_0;
+ u32 dword_1 = 0;
+ u32 dword_3 = 0;
if (cmd_dir == DMA_FROM_DEVICE) {
data_direction = UTP_DEVICE_TO_HOST;
@@ -2249,23 +2249,17 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
dword_0 |= UTP_REQ_DESC_INT_CMD;
/* Transfer request descriptor header fields */
- if (ufshcd_lrbp_crypto_enabled(lrbp)) {
-#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+ if (lrbp->crypto_key_slot >= 0) {
dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
dword_0 |= lrbp->crypto_key_slot;
- req_desc->header.dword_1 =
- cpu_to_le32(lower_32_bits(lrbp->data_unit_num));
- req_desc->header.dword_3 =
- cpu_to_le32(upper_32_bits(lrbp->data_unit_num));
-#endif /* CONFIG_SCSI_UFS_CRYPTO */
- } else {
- /* dword_1 and dword_3 are reserved, hence they are set to 0 */
- req_desc->header.dword_1 = 0;
- req_desc->header.dword_3 = 0;
+ dword_1 = lower_32_bits(lrbp->data_unit_num);
+ dword_3 = upper_32_bits(lrbp->data_unit_num);
}
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
req_desc->header.dword_0 = cpu_to_le32(dword_0);
-
+ req_desc->header.dword_1 = cpu_to_le32(dword_1);
/*
* assigning invalid value for command status. Controller
* updates OCS on command completion, with the command
@@ -2273,6 +2267,7 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
*/
req_desc->header.dword_2 =
cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
+ req_desc->header.dword_3 = cpu_to_le32(dword_3);
req_desc->prd_table_length = 0;
}
@@ -2529,12 +2524,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
- err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
- if (err) {
- lrbp->cmd = NULL;
- ufshcd_release(hba);
- goto out;
- }
+ ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
+
lrbp->req_abort_skip = false;
ufshcd_comp_scsi_upiu(hba, lrbp);
@@ -2568,8 +2559,8 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
lrbp->task_tag = tag;
lrbp->lun = 0; /* device management cmd is not specific to any LUN */
lrbp->intr_cmd = true; /* No interrupt aggregation */
-#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
- lrbp->crypto_enable = false; /* No crypto operations */
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+ lrbp->crypto_key_slot = -1; /* No crypto operations */
#endif
hba->dev_cmd.type = cmd_type;
@@ -4273,8 +4264,6 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
{
int err;
- ufshcd_crypto_disable(hba);
-
ufshcd_writel(hba, CONTROLLER_DISABLE, REG_CONTROLLER_ENABLE);
err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
CONTROLLER_ENABLE, CONTROLLER_DISABLE,
@@ -4677,7 +4666,6 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
static void ufshcd_slave_destroy(struct scsi_device *sdev)
{
struct ufs_hba *hba;
- struct request_queue *q = sdev->request_queue;
hba = shost_priv(sdev->host);
/* Drop the reference as it won't be needed anymore */
@@ -4688,8 +4676,6 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
hba->sdev_ufs_device = NULL;
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
-
- ufshcd_crypto_destroy_rq_keyslot_manager(hba, q);
}
/**
@@ -5957,6 +5943,9 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
lrbp->task_tag = tag;
lrbp->lun = 0;
lrbp->intr_cmd = true;
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+ lrbp->crypto_key_slot = -1; /* No crypto operations */
+#endif
hba->dev_cmd.type = cmd_type;
switch (hba->ufs_version) {
@@ -8397,6 +8386,7 @@ EXPORT_SYMBOL_GPL(ufshcd_remove);
*/
void ufshcd_dealloc_host(struct ufs_hba *hba)
{
+ ufshcd_crypto_destroy_keyslot_manager(hba);
scsi_host_put(hba->host);
}
EXPORT_SYMBOL_GPL(ufshcd_dealloc_host);
@@ -8612,7 +8602,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
err = ufshcd_hba_init_crypto(hba);
if (err) {
dev_err(hba->dev, "crypto setup failed\n");
- goto out_remove_scsi_host;
+ goto free_tmf_queue;
}
/* Host controller enable */
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index b98c53d..58fff1d 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -57,6 +57,7 @@
#include <linux/regulator/consumer.h>
#include <linux/bitfield.h>
#include <linux/devfreq.h>
+#include <linux/keyslot-manager.h>
#include "unipro.h"
#include <asm/irq.h>
@@ -182,8 +183,7 @@ struct ufs_pm_lvl_states {
* @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
* @issue_time_stamp: time stamp for debug purposes
* @compl_time_stamp: time stamp for statistics
- * @crypto_enable: whether or not the request needs inline crypto operations
- * @crypto_key_slot: the key slot to use for inline crypto
+ * @crypto_key_slot: the key slot to use for inline crypto (-1 if none)
* @data_unit_num: the data unit number for the first block for inline crypto
* @req_abort_skip: skip request abort task flag
*/
@@ -209,11 +209,10 @@ struct ufshcd_lrb {
bool intr_cmd;
ktime_t issue_time_stamp;
ktime_t compl_time_stamp;
-#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
- bool crypto_enable;
- u8 crypto_key_slot;
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+ int crypto_key_slot;
u64 data_unit_num;
-#endif /* CONFIG_SCSI_UFS_CRYPTO */
+#endif
bool req_abort_skip;
};
@@ -359,22 +358,20 @@ struct ufs_hba_variant_ops {
const union ufs_crypto_cfg_entry *cfg, int slot);
};
-struct keyslot_mgmt_ll_ops;
+struct blk_ksm_ll_ops;
struct ufs_hba_crypto_variant_ops {
void (*setup_rq_keyslot_manager)(struct ufs_hba *hba,
struct request_queue *q);
- void (*destroy_rq_keyslot_manager)(struct ufs_hba *hba,
- struct request_queue *q);
+ void (*destroy_keyslot_manager)(struct ufs_hba *hba);
int (*hba_init_crypto)(struct ufs_hba *hba,
- const struct keyslot_mgmt_ll_ops *ksm_ops);
- void (*enable)(struct ufs_hba *hba);
- void (*disable)(struct ufs_hba *hba);
+ const struct blk_ksm_ll_ops *ksm_ops);
+ bool (*enable)(struct ufs_hba *hba);
int (*suspend)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
int (*resume)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
int (*debug)(struct ufs_hba *hba);
- int (*prepare_lrbp_crypto)(struct ufs_hba *hba,
- struct scsi_cmnd *cmd,
- struct ufshcd_lrb *lrbp);
+ void (*prepare_lrbp_crypto)(struct ufs_hba *hba,
+ struct scsi_cmnd *cmd,
+ struct ufshcd_lrb *lrbp);
int (*map_sg_crypto)(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
int (*complete_lrbp_crypto)(struct ufs_hba *hba,
struct scsi_cmnd *cmd,
@@ -560,12 +557,6 @@ enum ufshcd_quirks {
* OCS FATAL ERROR with device error through sense data
*/
UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR = 1 << 6,
-
- /*
- * This quirk needs to be enabled if the host controller advertises
- * inline encryption support but it doesn't work correctly.
- */
- UFSHCD_QUIRK_BROKEN_CRYPTO = 1 << 7,
};
enum ufshcd_caps {
@@ -608,7 +599,7 @@ enum ufshcd_caps {
* This capability allows the host controller driver to use the
* inline crypto engine, if it is present
*/
- UFSHCD_CAP_CRYPTO = 1 << 7,
+ UFSHCD_CAP_CRYPTO = 1 << 8,
};
/**
@@ -790,12 +781,11 @@ struct ufs_hba {
struct request_queue *bsg_queue;
#ifdef CONFIG_SCSI_UFS_CRYPTO
- /* crypto */
union ufs_crypto_capabilities crypto_capabilities;
union ufs_crypto_cap_entry *crypto_cap_array;
u32 crypto_cfg_register;
- struct keyslot_manager *ksm;
-#endif /* CONFIG_SCSI_UFS_CRYPTO */
+ struct blk_keyslot_manager ksm;
+#endif
};
/* Returns true if clocks can be gated. Otherwise false */
diff --git a/fs/buffer.c b/fs/buffer.c
index 58021b4..dc5e05b 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -320,9 +320,8 @@ static void decrypt_bh(struct work_struct *work)
static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate)
{
/* Decrypt if needed */
- if (uptodate && IS_ENABLED(CONFIG_FS_ENCRYPTION) &&
- IS_ENCRYPTED(bh->b_page->mapping->host) &&
- S_ISREG(bh->b_page->mapping->host->i_mode)) {
+ if (uptodate &&
+ fscrypt_inode_uses_fs_layer_crypto(bh->b_page->mapping->host)) {
struct decrypt_bh_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
if (ctx) {
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index d2d6e1a..1ea9369 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -41,50 +41,47 @@ void fscrypt_decrypt_bio(struct bio *bio)
}
EXPORT_SYMBOL(fscrypt_decrypt_bio);
-static int fscrypt_zeroout_range_inlinecrypt(const struct inode *inode,
- pgoff_t lblk,
- sector_t pblk, unsigned int len)
+static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
+ pgoff_t lblk, sector_t pblk,
+ unsigned int len)
{
const unsigned int blockbits = inode->i_blkbits;
- const unsigned int blocks_per_page_bits = PAGE_SHIFT - blockbits;
- const unsigned int blocks_per_page = 1 << blocks_per_page_bits;
- unsigned int i;
+ const unsigned int blocks_per_page = 1 << (PAGE_SHIFT - blockbits);
struct bio *bio;
- int ret, err;
+ int ret, err = 0;
+ int num_pages = 0;
/* This always succeeds since __GFP_DIRECT_RECLAIM is set. */
bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
- do {
- bio_set_dev(bio, inode->i_sb->s_bdev);
- bio->bi_iter.bi_sector = pblk << (blockbits - 9);
- bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
- fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
+ while (len) {
+ unsigned int blocks_this_page = min(len, blocks_per_page);
+ unsigned int bytes_this_page = blocks_this_page << blockbits;
- i = 0;
- do {
- unsigned int blocks_this_page =
- min(len, blocks_per_page);
- unsigned int bytes_this_page =
- blocks_this_page << blockbits;
-
- ret = bio_add_page(bio, ZERO_PAGE(0),
- bytes_this_page, 0);
- if (WARN_ON(ret != bytes_this_page)) {
- err = -EIO;
- goto out;
- }
- lblk += blocks_this_page;
- pblk += blocks_this_page;
- len -= blocks_this_page;
- } while (++i != BIO_MAX_PAGES && len != 0);
-
- err = submit_bio_wait(bio);
- if (err)
+ if (num_pages == 0) {
+ fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
+ bio_set_dev(bio, inode->i_sb->s_bdev);
+ bio->bi_iter.bi_sector =
+ pblk << (blockbits - SECTOR_SHIFT);
+ bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
+ }
+ ret = bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0);
+ if (WARN_ON(ret != bytes_this_page)) {
+ err = -EIO;
goto out;
- bio_reset(bio);
- } while (len != 0);
- err = 0;
+ }
+ num_pages++;
+ len -= blocks_this_page;
+ lblk += blocks_this_page;
+ pblk += blocks_this_page;
+ if (num_pages == BIO_MAX_PAGES || !len) {
+ err = submit_bio_wait(bio);
+ if (err)
+ goto out;
+ bio_reset(bio);
+ num_pages = 0;
+ }
+ }
out:
bio_put(bio);
return err;
@@ -125,8 +122,8 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
return 0;
if (fscrypt_inode_uses_inline_crypto(inode))
- return fscrypt_zeroout_range_inlinecrypt(inode, lblk, pblk,
- len);
+ return fscrypt_zeroout_range_inline_crypt(inode, lblk, pblk,
+ len);
BUILD_BUG_ON(ARRAY_SIZE(pages) > BIO_MAX_PAGES);
nr_pages = min_t(unsigned int, ARRAY_SIZE(pages),
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index d2ceda7..b88d976 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -108,7 +108,7 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
DECLARE_CRYPTO_WAIT(wait);
struct scatterlist dst, src;
struct fscrypt_info *ci = inode->i_crypt_info;
- struct crypto_skcipher *tfm = ci->ci_key.tfm;
+ struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
int res = 0;
if (WARN_ON_ONCE(len <= 0))
diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
index 2030160..0ba25a9 100644
--- a/fs/crypto/fname.c
+++ b/fs/crypto/fname.c
@@ -115,7 +115,7 @@ int fscrypt_fname_encrypt(const struct inode *inode, const struct qstr *iname,
struct skcipher_request *req = NULL;
DECLARE_CRYPTO_WAIT(wait);
const struct fscrypt_info *ci = inode->i_crypt_info;
- struct crypto_skcipher *tfm = ci->ci_key.tfm;
+ struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
union fscrypt_iv iv;
struct scatterlist sg;
int res;
@@ -171,7 +171,7 @@ static int fname_decrypt(const struct inode *inode,
DECLARE_CRYPTO_WAIT(wait);
struct scatterlist src_sg, dst_sg;
const struct fscrypt_info *ci = inode->i_crypt_info;
- struct crypto_skcipher *tfm = ci->ci_key.tfm;
+ struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
union fscrypt_iv iv;
int res;
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index cca57e1..997cbaf 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -14,7 +14,7 @@
#include <linux/fscrypt.h>
#include <linux/siphash.h>
#include <crypto/hash.h>
-#include <linux/bio-crypt-ctx.h>
+#include <linux/blk-crypto.h>
#define CONST_STRLEN(str) (sizeof(str) - 1)
@@ -192,9 +192,12 @@ struct fscrypt_prepared_key {
struct fscrypt_info {
/* The key in a form prepared for actual encryption/decryption */
- struct fscrypt_prepared_key ci_key;
+ struct fscrypt_prepared_key ci_enc_key;
- /* True if the key should be freed when this fscrypt_info is freed */
+ /*
+ * True if the ci_enc_key should be freed when this fscrypt_info is
+ * freed
+ */
bool ci_owns_key;
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
@@ -229,7 +232,7 @@ struct fscrypt_info {
/*
* If non-NULL, then encryption is done using the master key directly
- * and ci_key will equal ci_direct_key->dk_key.
+ * and ci_enc_key will equal ci_direct_key->dk_key.
*/
struct fscrypt_direct_key *ci_direct_key;
@@ -328,8 +331,8 @@ void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf);
/* inline_crypt.c */
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
-extern int fscrypt_select_encryption_impl(struct fscrypt_info *ci,
- bool is_hw_wrapped_key);
+int fscrypt_select_encryption_impl(struct fscrypt_info *ci,
+ bool is_hw_wrapped_key);
static inline bool
fscrypt_using_inline_encryption(const struct fscrypt_info *ci)
@@ -337,15 +340,13 @@ fscrypt_using_inline_encryption(const struct fscrypt_info *ci)
return ci->ci_inlinecrypt;
}
-extern int fscrypt_prepare_inline_crypt_key(
- struct fscrypt_prepared_key *prep_key,
- const u8 *raw_key,
- unsigned int raw_key_size,
- bool is_hw_wrapped,
- const struct fscrypt_info *ci);
+int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
+ const u8 *raw_key,
+ unsigned int raw_key_size,
+ bool is_hw_wrapped,
+ const struct fscrypt_info *ci);
-extern void fscrypt_destroy_inline_crypt_key(
- struct fscrypt_prepared_key *prep_key);
+void fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key);
extern int fscrypt_derive_raw_secret(struct super_block *sb,
const u8 *wrapped_key,
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
index 9b3a0c1..0b80992 100644
--- a/fs/crypto/inline_crypt.c
+++ b/fs/crypto/inline_crypt.c
@@ -16,6 +16,7 @@
#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/keyslot-manager.h>
+#include <linux/sched/mm.h>
#include <linux/uio.h>
#include "fscrypt_private.h"
@@ -69,51 +70,39 @@ int fscrypt_select_encryption_impl(struct fscrypt_info *ci,
{
const struct inode *inode = ci->ci_inode;
struct super_block *sb = inode->i_sb;
- enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode;
- unsigned int dun_bytes;
- struct request_queue **devs;
+ struct blk_crypto_config crypto_cfg;
int num_devs;
+ struct request_queue **devs;
int i;
/* The file must need contents encryption, not filenames encryption */
- if (!S_ISREG(inode->i_mode))
+ if (!fscrypt_needs_contents_encryption(inode))
return 0;
- /* blk-crypto must implement the needed encryption algorithm */
- if (crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+ /* The crypto mode must be valid */
+ if (ci->ci_mode->blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
return 0;
/* The filesystem must be mounted with -o inlinecrypt */
- if (!sb->s_cop->inline_crypt_enabled ||
- !sb->s_cop->inline_crypt_enabled(sb))
+ if (!(sb->s_flags & SB_INLINECRYPT))
return 0;
/*
- * The needed encryption settings must be supported either by
- * blk-crypto-fallback, or by hardware on all the filesystem's devices.
+ * blk-crypto must support the crypto configuration we'll use for the
+ * inode on all devices in the sb
*/
-
- if (IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) &&
- !is_hw_wrapped_key) {
- ci->ci_inlinecrypt = true;
- return 0;
- }
-
+ crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode;
+ crypto_cfg.data_unit_size = sb->s_blocksize;
+ crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci);
+ crypto_cfg.is_hw_wrapped = is_hw_wrapped_key;
num_devs = fscrypt_get_num_devices(sb);
devs = kmalloc_array(num_devs, sizeof(*devs), GFP_NOFS);
if (!devs)
return -ENOMEM;
-
fscrypt_get_devices(sb, num_devs, devs);
- dun_bytes = fscrypt_get_dun_bytes(ci);
-
for (i = 0; i < num_devs; i++) {
- if (!keyslot_manager_crypto_mode_supported(devs[i]->ksm,
- crypto_mode,
- dun_bytes,
- sb->s_blocksize,
- is_hw_wrapped_key))
+ if (!blk_crypto_config_supported(devs[i], &crypto_cfg))
goto out_free_devs;
}
@@ -132,16 +121,12 @@ int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
const struct inode *inode = ci->ci_inode;
struct super_block *sb = inode->i_sb;
enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode;
- unsigned int dun_bytes;
- int num_devs;
+ int num_devs = fscrypt_get_num_devices(sb);
int queue_refs = 0;
struct fscrypt_blk_crypto_key *blk_key;
int err;
int i;
-
- num_devs = fscrypt_get_num_devices(sb);
- if (WARN_ON(num_devs < 1))
- return -EINVAL;
+ unsigned int flags;
blk_key = kzalloc(struct_size(blk_key, devs, num_devs), GFP_NOFS);
if (!blk_key)
@@ -150,14 +135,12 @@ int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
blk_key->num_devs = num_devs;
fscrypt_get_devices(sb, num_devs, blk_key->devs);
- dun_bytes = fscrypt_get_dun_bytes(ci);
-
BUILD_BUG_ON(FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE >
BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE);
err = blk_crypto_init_key(&blk_key->base, raw_key, raw_key_size,
- is_hw_wrapped, crypto_mode, dun_bytes,
- sb->s_blocksize);
+ is_hw_wrapped, crypto_mode,
+ fscrypt_get_dun_bytes(ci), sb->s_blocksize);
if (err) {
fscrypt_err(inode, "error %d initializing blk-crypto key", err);
goto fail;
@@ -178,10 +161,10 @@ int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
}
queue_refs++;
- err = blk_crypto_start_using_mode(crypto_mode, dun_bytes,
- sb->s_blocksize,
- is_hw_wrapped,
- blk_key->devs[i]);
+ flags = memalloc_nofs_save();
+ err = blk_crypto_start_using_key(&blk_key->base,
+ blk_key->devs[i]);
+ memalloc_nofs_restore(flags);
if (err) {
fscrypt_err(inode,
"error %d starting to use blk-crypto", err);
@@ -227,42 +210,15 @@ int fscrypt_derive_raw_secret(struct super_block *sb,
if (!q->ksm)
return -EOPNOTSUPP;
- return keyslot_manager_derive_raw_secret(q->ksm,
- wrapped_key, wrapped_key_size,
- raw_secret, raw_secret_size);
+ return blk_ksm_derive_raw_secret(q->ksm, wrapped_key, wrapped_key_size,
+ raw_secret, raw_secret_size);
}
-/**
- * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline
- * encryption
- * @inode: an inode
- *
- * Return: true if the inode requires file contents encryption and if the
- * encryption should be done in the block layer via blk-crypto rather
- * than in the filesystem layer.
- */
-bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode)
{
- return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
- inode->i_crypt_info->ci_inlinecrypt;
+ return inode->i_crypt_info->ci_inlinecrypt;
}
-EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);
-
-/**
- * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer
- * encryption
- * @inode: an inode
- *
- * Return: true if the inode requires file contents encryption and if the
- * encryption should be done in the filesystem layer rather than in the
- * block layer via blk-crypto.
- */
-bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
-{
- return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
- !inode->i_crypt_info->ci_inlinecrypt;
-}
-EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto);
+EXPORT_SYMBOL_GPL(__fscrypt_inode_uses_inline_crypto);
static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num,
u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
@@ -279,7 +235,7 @@ static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num,
}
/**
- * fscrypt_set_bio_crypt_ctx - prepare a file contents bio for inline encryption
+ * fscrypt_set_bio_crypt_ctx() - prepare a file contents bio for inline crypto
* @bio: a bio which will eventually be submitted to the file
* @inode: the file's inode
* @first_lblk: the first file logical block number in the I/O
@@ -309,7 +265,7 @@ void fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
return;
fscrypt_generate_dun(ci, first_lblk, dun);
- bio_crypt_set_ctx(bio, &ci->ci_key.blk_key->base, dun, gfp_mask);
+ bio_crypt_set_ctx(bio, &ci->ci_enc_key.blk_key->base, dun, gfp_mask);
}
EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx);
@@ -338,8 +294,8 @@ static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh,
}
/**
- * fscrypt_set_bio_crypt_ctx_bh - prepare a file contents bio for inline
- * encryption
+ * fscrypt_set_bio_crypt_ctx_bh() - prepare a file contents bio for inline
+ * crypto
* @bio: a bio which will eventually be submitted to the file
* @first_bh: the first buffer_head for which I/O will be submitted
* @gfp_mask: memory allocation flags
@@ -348,8 +304,8 @@ static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh,
* of an inode and block number directly.
*/
void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
- const struct buffer_head *first_bh,
- gfp_t gfp_mask)
+ const struct buffer_head *first_bh,
+ gfp_t gfp_mask)
{
const struct inode *inode;
u64 first_lblk;
@@ -360,7 +316,7 @@ void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx_bh);
/**
- * fscrypt_mergeable_bio - test whether data can be added to a bio
+ * fscrypt_mergeable_bio() - test whether data can be added to a bio
* @bio: the bio being built up
* @inode: the inode for the next part of the I/O
* @next_lblk: the next file logical block number in the I/O
@@ -398,7 +354,7 @@ bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
* uses the same pointer. I.e., there's currently no need to support
* merging requests where the keys are the same but the pointers differ.
*/
- if (bc->bc_key != &inode->i_crypt_info->ci_key.blk_key->base)
+ if (bc->bc_key != &inode->i_crypt_info->ci_enc_key.blk_key->base)
return false;
fscrypt_generate_dun(inode->i_crypt_info, next_lblk, next_dun);
@@ -407,7 +363,7 @@ bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio);
/**
- * fscrypt_mergeable_bio_bh - test whether data can be added to a bio
+ * fscrypt_mergeable_bio_bh() - test whether data can be added to a bio
* @bio: the bio being built up
* @next_bh: the next buffer_head for which I/O will be submitted
*
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
index 2ce4d48..f0f523f 100644
--- a/fs/crypto/keysetup.c
+++ b/fs/crypto/keysetup.c
@@ -152,7 +152,7 @@ void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key)
int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci, const u8 *raw_key)
{
ci->ci_owns_key = true;
- return fscrypt_prepare_key(&ci->ci_key, raw_key, ci->ci_mode->keysize,
+ return fscrypt_prepare_key(&ci->ci_enc_key, raw_key, ci->ci_mode->keysize,
false /*is_hw_wrapped*/, ci);
}
@@ -176,7 +176,7 @@ static int setup_per_mode_enc_key(struct fscrypt_info *ci,
prep_key = &keys[mode_num];
if (fscrypt_is_key_prepared(prep_key, ci)) {
- ci->ci_key = *prep_key;
+ ci->ci_enc_key = *prep_key;
return 0;
}
@@ -228,8 +228,7 @@ static int setup_per_mode_enc_key(struct fscrypt_info *ci,
goto out_unlock;
}
done_unlock:
- ci->ci_key = *prep_key;
-
+ ci->ci_enc_key = *prep_key;
err = 0;
out_unlock:
mutex_unlock(&fscrypt_mode_key_setup_mutex);
@@ -471,7 +470,7 @@ static void put_crypt_info(struct fscrypt_info *ci)
if (ci->ci_direct_key)
fscrypt_put_direct_key(ci->ci_direct_key);
else if (ci->ci_owns_key)
- fscrypt_destroy_prepared_key(&ci->ci_key);
+ fscrypt_destroy_prepared_key(&ci->ci_enc_key);
key = ci->ci_master_key;
if (key) {
diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
index a39a5fb..6bcf1cf 100644
--- a/fs/crypto/keysetup_v1.c
+++ b/fs/crypto/keysetup_v1.c
@@ -258,7 +258,7 @@ static int setup_v1_file_key_direct(struct fscrypt_info *ci,
if (IS_ERR(dk))
return PTR_ERR(dk);
ci->ci_direct_key = dk;
- ci->ci_key = dk->dk_key;
+ ci->ci_enc_key = dk->dk_key;
return 0;
}
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 30a7bd0..31a5a71 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1150,7 +1150,6 @@ struct ext4_inode_info {
#define EXT4_MOUNT_JOURNAL_CHECKSUM 0x800000 /* Journal checksums */
#define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT 0x1000000 /* Journal Async Commit */
#define EXT4_MOUNT_WARN_ON_ERROR 0x2000000 /* Trigger WARN_ON on error */
-#define EXT4_MOUNT_INLINECRYPT 0x4000000 /* Inline encryption support */
#define EXT4_MOUNT_DELALLOC 0x8000000 /* Delalloc support */
#define EXT4_MOUNT_DATA_ERR_ABORT 0x10000000 /* Abort on file data write */
#define EXT4_MOUNT_BLOCK_VALIDITY 0x20000000 /* Block validity checking */
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index 82ae6e4..bb1c3b1 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -412,9 +412,9 @@ int ext4_mpage_readpages(struct inode *inode,
*/
bio = bio_alloc(GFP_KERNEL,
min_t(int, nr_pages, BIO_MAX_PAGES));
- ext4_set_bio_post_read_ctx(bio, inode, page->index);
fscrypt_set_bio_crypt_ctx(bio, inode, next_block,
GFP_KERNEL);
+ ext4_set_bio_post_read_ctx(bio, inode, page->index);
bio_set_dev(bio, bdev);
bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
bio->bi_end_io = mpage_end_io;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 64de000..422b1f7 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1408,11 +1408,6 @@ static void ext4_get_ino_and_lblk_bits(struct super_block *sb,
*lblk_bits_ret = 8 * sizeof(ext4_lblk_t);
}
-static bool ext4_inline_crypt_enabled(struct super_block *sb)
-{
- return test_opt(sb, INLINECRYPT);
-}
-
static const struct fscrypt_operations ext4_cryptops = {
.key_prefix = "ext4:",
.get_context = ext4_get_context,
@@ -1422,7 +1417,6 @@ static const struct fscrypt_operations ext4_cryptops = {
.max_namelen = EXT4_NAME_LEN,
.has_stable_inodes = ext4_has_stable_inodes,
.get_ino_and_lblk_bits = ext4_get_ino_and_lblk_bits,
- .inline_crypt_enabled = ext4_inline_crypt_enabled,
};
#endif
@@ -1828,11 +1822,6 @@ static const struct mount_opts {
{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
{Opt_max_dir_size_kb, 0, MOPT_GTE0},
{Opt_test_dummy_encryption, 0, MOPT_STRING},
-#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
- {Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_SET},
-#else
- {Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_NOSUPPORT},
-#endif
{Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET},
{Opt_err, 0, 0}
};
@@ -1951,6 +1940,13 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
case Opt_nolazytime:
sb->s_flags &= ~SB_LAZYTIME;
return 1;
+ case Opt_inlinecrypt:
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+ sb->s_flags |= SB_INLINECRYPT;
+#else
+ ext4_msg(sb, KERN_ERR, "inline encryption not supported");
+#endif
+ return 1;
}
for (m = ext4_mount_opts; m->token != Opt_err; m++)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index df7b2d1..a19c093 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -975,7 +975,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
.submitted = false,
.io_type = io_type,
.io_wbc = wbc,
- .encrypted = f2fs_encrypted_file(cc->inode),
+ .encrypted = fscrypt_inode_uses_fs_layer_crypto(cc->inode),
};
struct dnode_of_data dn;
struct node_info ni;
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index e8d2a2a..f76b749 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -14,6 +14,7 @@
#include <linux/pagevec.h>
#include <linux/blkdev.h>
#include <linux/bio.h>
+#include <linux/blk-crypto.h>
#include <linux/swap.h>
#include <linux/prefetch.h>
#include <linux/uio.h>
@@ -766,9 +767,10 @@ static void del_bio_entry(struct bio_entry *be)
kmem_cache_free(bio_entry_slab, be);
}
-static int add_ipu_page(struct f2fs_sb_info *sbi, struct bio **bio,
+static int add_ipu_page(struct f2fs_io_info *fio, struct bio **bio,
struct page *page)
{
+ struct f2fs_sb_info *sbi = fio->sbi;
enum temp_type temp;
bool found = false;
int ret = -EAGAIN;
@@ -785,13 +787,18 @@ static int add_ipu_page(struct f2fs_sb_info *sbi, struct bio **bio,
found = true;
- if (bio_add_page(*bio, page, PAGE_SIZE, 0) ==
- PAGE_SIZE) {
+ if (page_is_mergeable(sbi, *bio, *fio->last_block,
+ fio->new_blkaddr) &&
+ f2fs_crypt_mergeable_bio(*bio,
+ fio->page->mapping->host,
+ fio->page->index, fio) &&
+ bio_add_page(*bio, page, PAGE_SIZE, 0) ==
+ PAGE_SIZE) {
ret = 0;
break;
}
- /* bio is full */
+ /* page can't be merged into bio; submit the bio */
del_bio_entry(be);
__submit_bio(sbi, *bio, DATA);
break;
@@ -876,22 +883,16 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
trace_f2fs_submit_page_bio(page, fio);
f2fs_trace_ios(fio, 0);
- if (bio && (!page_is_mergeable(fio->sbi, bio, *fio->last_block,
- fio->new_blkaddr) ||
- !f2fs_crypt_mergeable_bio(bio, fio->page->mapping->host,
- fio->page->index, fio)))
- f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
alloc_new:
if (!bio) {
bio = __bio_alloc(fio, BIO_MAX_PAGES);
f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
- fio->page->index, fio,
- GFP_NOIO);
+ fio->page->index, fio, GFP_NOIO);
bio_set_op_attrs(bio, fio->op, fio->op_flags);
add_bio_entry(fio->sbi, bio, page, fio->temp);
} else {
- if (add_ipu_page(fio->sbi, &bio, page))
+ if (add_ipu_page(fio, &bio, page))
goto alloc_new;
}
@@ -960,8 +961,7 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
}
io->bio = __bio_alloc(fio, BIO_MAX_PAGES);
f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host,
- fio->page->index, fio,
- GFP_NOIO);
+ fio->page->index, fio, GFP_NOIO);
io->fio = *fio;
}
@@ -2167,8 +2167,9 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
blkaddr = data_blkaddr(dn.inode, dn.node_page,
dn.ofs_in_node + i + 1);
- if (bio && !page_is_mergeable(sbi, bio,
- *last_block_in_bio, blkaddr)) {
+ if (bio && (!page_is_mergeable(sbi, bio,
+ *last_block_in_bio, blkaddr) ||
+ !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
submit_and_realloc:
__submit_bio(sbi, bio, DATA);
bio = NULL;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 9a1224f..110890b 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -139,9 +139,6 @@ struct f2fs_mount_info {
int fs_mode; /* fs mode: LFS or ADAPTIVE */
int bggc_mode; /* bggc mode: off, on or sync */
struct fscrypt_dummy_context dummy_enc_ctx; /* test dummy encryption */
-#ifdef CONFIG_FS_ENCRYPTION
- bool inlinecrypt; /* inline encryption enabled */
-#endif
block_t unusable_cap; /* Amount of space allowed to be
* unusable when disabling checkpoint
*/
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index d4a92eb..f0da7be 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -831,7 +831,7 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
break;
case Opt_inlinecrypt:
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
- F2FS_OPTION(sbi).inlinecrypt = true;
+ sb->s_flags |= SB_INLINECRYPT;
#else
f2fs_info(sbi, "inline encryption not supported");
#endif
@@ -1593,11 +1593,6 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
fscrypt_show_test_dummy_encryption(seq, ',', sbi->sb);
-#ifdef CONFIG_FS_ENCRYPTION
- if (F2FS_OPTION(sbi).inlinecrypt)
- seq_puts(seq, ",inlinecrypt");
-#endif
-
if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_DEFAULT)
seq_printf(seq, ",alloc_mode=%s", "default");
else if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_REUSE)
@@ -1625,9 +1620,6 @@ static void default_options(struct f2fs_sb_info *sbi)
F2FS_OPTION(sbi).whint_mode = WHINT_MODE_OFF;
F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT;
F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX;
-#ifdef CONFIG_FS_ENCRYPTION
- F2FS_OPTION(sbi).inlinecrypt = false;
-#endif
F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZ4;
@@ -1635,6 +1627,8 @@ static void default_options(struct f2fs_sb_info *sbi)
F2FS_OPTION(sbi).compress_ext_cnt = 0;
F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
+ sbi->sb->s_flags &= ~SB_INLINECRYPT;
+
set_opt(sbi, INLINE_XATTR);
set_opt(sbi, INLINE_DATA);
set_opt(sbi, INLINE_DENTRY);
@@ -2480,11 +2474,6 @@ static void f2fs_get_ino_and_lblk_bits(struct super_block *sb,
*lblk_bits_ret = 8 * sizeof(block_t);
}
-static bool f2fs_inline_crypt_enabled(struct super_block *sb)
-{
- return F2FS_OPTION(F2FS_SB(sb)).inlinecrypt;
-}
-
static int f2fs_get_num_devices(struct super_block *sb)
{
struct f2fs_sb_info *sbi = F2FS_SB(sb);
@@ -2513,7 +2502,6 @@ static const struct fscrypt_operations f2fs_cryptops = {
.max_namelen = F2FS_NAME_LEN,
.has_stable_inodes = f2fs_has_stable_inodes,
.get_ino_and_lblk_bits = f2fs_get_ino_and_lblk_bits,
- .inline_crypt_enabled = f2fs_inline_crypt_enabled,
.get_num_devices = f2fs_get_num_devices,
.get_devices = f2fs_get_devices,
};
diff --git a/fs/proc_namespace.c b/fs/proc_namespace.c
index b6cda07..1574fe1 100644
--- a/fs/proc_namespace.c
+++ b/fs/proc_namespace.c
@@ -49,6 +49,7 @@ static int show_sb_opts(struct seq_file *m, struct super_block *sb)
{ SB_DIRSYNC, ",dirsync" },
{ SB_MANDLOCK, ",mand" },
{ SB_LAZYTIME, ",lazytime" },
+ { SB_INLINECRYPT, ",inlinecrypt" },
{ 0, NULL }
};
const struct proc_fs_info *fs_infop;
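
Since inline encryption is now recorded in the VFS superblock flags rather than in per-filesystem mount flags, showing ",inlinecrypt" in /proc/mounts reduces to the one-line table entry above, and any code that needs to know whether the option was given can simply test the flag. A minimal sketch of such a check (the helper name is illustrative, not something this patch adds):

    #include <linux/fs.h>

    /* Illustrative only; SB_INLINECRYPT is set when "inlinecrypt" is parsed. */
    static inline bool sb_wants_inlinecrypt(const struct super_block *sb)
    {
            return sb->s_flags & SB_INLINECRYPT;
    }
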
diff --git a/include/linux/bio-crypt-ctx.h b/include/linux/bio-crypt-ctx.h
deleted file mode 100644
index 9df113f..0000000
--- a/include/linux/bio-crypt-ctx.h
+++ /dev/null
@@ -1,257 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright 2019 Google LLC
- */
-#ifndef __LINUX_BIO_CRYPT_CTX_H
-#define __LINUX_BIO_CRYPT_CTX_H
-
-enum blk_crypto_mode_num {
- BLK_ENCRYPTION_MODE_INVALID,
- BLK_ENCRYPTION_MODE_AES_256_XTS,
- BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
- BLK_ENCRYPTION_MODE_ADIANTUM,
- BLK_ENCRYPTION_MODE_MAX,
-};
-
-#ifdef CONFIG_BLOCK
-#include <linux/blk_types.h>
-
-#ifdef CONFIG_BLK_INLINE_ENCRYPTION
-
-#define BLK_CRYPTO_MAX_KEY_SIZE 64
-#define BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE 128
-
-/**
- * struct blk_crypto_key - an inline encryption key
- * @crypto_mode: encryption algorithm this key is for
- * @data_unit_size: the data unit size for all encryption/decryptions with this
- * key. This is the size in bytes of each individual plaintext and
- * ciphertext. This is always a power of 2. It might be e.g. the
- * filesystem block size or the disk sector size.
- * @data_unit_size_bits: log2 of data_unit_size
- * @size: size of this key in bytes (determined by @crypto_mode)
- * @hash: hash of this key, for keyslot manager use only
- * @is_hw_wrapped: @raw points to a wrapped key to be used by an inline
- * encryption hardware that accepts wrapped keys.
- * @raw: the raw bytes of this key. Only the first @size bytes are used.
- *
- * A blk_crypto_key is immutable once created, and many bios can reference it at
- * the same time. It must not be freed until all bios using it have completed.
- */
-struct blk_crypto_key {
- enum blk_crypto_mode_num crypto_mode;
- unsigned int data_unit_size;
- unsigned int data_unit_size_bits;
- unsigned int size;
-
- /*
- * Hack to avoid breaking KMI: pack both hash and dun_bytes into the
- * hash field...
- */
-#define BLK_CRYPTO_KEY_HASH_MASK 0xffffff
-#define BLK_CRYPTO_KEY_DUN_BYTES_SHIFT 24
- unsigned int hash;
-
- bool is_hw_wrapped;
- u8 raw[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
-};
-
-#define BLK_CRYPTO_MAX_IV_SIZE 32
-#define BLK_CRYPTO_DUN_ARRAY_SIZE (BLK_CRYPTO_MAX_IV_SIZE/sizeof(u64))
-
-static inline void
-blk_crypto_key_set_hash_and_dun_bytes(struct blk_crypto_key *key,
- u32 hash, unsigned int dun_bytes)
-{
- key->hash = (dun_bytes << BLK_CRYPTO_KEY_DUN_BYTES_SHIFT) |
- (hash & BLK_CRYPTO_KEY_HASH_MASK);
-}
-
-static inline u32
-blk_crypto_key_hash(const struct blk_crypto_key *key)
-{
- return key->hash & BLK_CRYPTO_KEY_HASH_MASK;
-}
-
-static inline unsigned int
-blk_crypto_key_dun_bytes(const struct blk_crypto_key *key)
-{
- return key->hash >> BLK_CRYPTO_KEY_DUN_BYTES_SHIFT;
-}
-
-/**
- * struct bio_crypt_ctx - an inline encryption context
- * @bc_key: the key, algorithm, and data unit size to use
- * @bc_keyslot: the keyslot that has been assigned for this key in @bc_ksm,
- * or -1 if no keyslot has been assigned yet.
- * @bc_dun: the data unit number (starting IV) to use
- * @bc_ksm: the keyslot manager into which the key has been programmed with
- * @bc_keyslot, or NULL if this key hasn't yet been programmed.
- *
- * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for
- * write requests) or decrypted (for read requests) inline by the storage device
- * or controller, or by the crypto API fallback.
- */
-struct bio_crypt_ctx {
- const struct blk_crypto_key *bc_key;
- int bc_keyslot;
-
- /* Data unit number */
- u64 bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
-
- /*
- * The keyslot manager where the key has been programmed
- * with keyslot.
- */
- struct keyslot_manager *bc_ksm;
-};
-
-int bio_crypt_ctx_init(void);
-
-struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
-
-void bio_crypt_free_ctx(struct bio *bio);
-
-static inline bool bio_has_crypt_ctx(struct bio *bio)
-{
- return bio->bi_crypt_context;
-}
-
-void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask);
-
-static inline void bio_crypt_set_ctx(struct bio *bio,
- const struct blk_crypto_key *key,
- u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
- gfp_t gfp_mask)
-{
- struct bio_crypt_ctx *bc = bio_crypt_alloc_ctx(gfp_mask);
-
- bc->bc_key = key;
- memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
- bc->bc_ksm = NULL;
- bc->bc_keyslot = -1;
-
- bio->bi_crypt_context = bc;
-}
-
-void bio_crypt_ctx_release_keyslot(struct bio_crypt_ctx *bc);
-
-int bio_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
- struct keyslot_manager *ksm);
-
-struct request;
-bool bio_crypt_should_process(struct request *rq);
-
-static inline bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
- unsigned int bytes,
- u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
-{
- int i = 0;
- unsigned int inc = bytes >> bc->bc_key->data_unit_size_bits;
-
- while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
- if (bc->bc_dun[i] + inc != next_dun[i])
- return false;
- inc = ((bc->bc_dun[i] + inc) < inc);
- i++;
- }
-
- return true;
-}
-
-
-static inline void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
- unsigned int inc)
-{
- int i = 0;
-
- while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
- dun[i] += inc;
- inc = (dun[i] < inc);
- i++;
- }
-}
-
-static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
-{
- struct bio_crypt_ctx *bc = bio->bi_crypt_context;
-
- if (!bc)
- return;
-
- bio_crypt_dun_increment(bc->bc_dun,
- bytes >> bc->bc_key->data_unit_size_bits);
-}
-
-bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2);
-
-bool bio_crypt_ctx_mergeable(struct bio *b_1, unsigned int b1_bytes,
- struct bio *b_2);
-
-#else /* CONFIG_BLK_INLINE_ENCRYPTION */
-static inline int bio_crypt_ctx_init(void)
-{
- return 0;
-}
-
-static inline bool bio_has_crypt_ctx(struct bio *bio)
-{
- return false;
-}
-
-static inline void bio_crypt_clone(struct bio *dst, struct bio *src,
- gfp_t gfp_mask) { }
-
-static inline void bio_crypt_free_ctx(struct bio *bio) { }
-
-static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) { }
-
-static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
-{
- return true;
-}
-
-static inline bool bio_crypt_ctx_mergeable(struct bio *b_1,
- unsigned int b1_bytes,
- struct bio *b_2)
-{
- return true;
-}
-
-#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
-
-#if IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
-static inline void bio_set_skip_dm_default_key(struct bio *bio)
-{
- bio->bi_skip_dm_default_key = true;
-}
-
-static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
-{
- return bio->bi_skip_dm_default_key;
-}
-
-static inline void bio_clone_skip_dm_default_key(struct bio *dst,
- const struct bio *src)
-{
- dst->bi_skip_dm_default_key = src->bi_skip_dm_default_key;
-}
-#else /* CONFIG_DM_DEFAULT_KEY */
-static inline void bio_set_skip_dm_default_key(struct bio *bio)
-{
-}
-
-static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
-{
- return false;
-}
-
-static inline void bio_clone_skip_dm_default_key(struct bio *dst,
- const struct bio *src)
-{
-}
-#endif /* !CONFIG_DM_DEFAULT_KEY */
-
-#endif /* CONFIG_BLOCK */
-
-#endif /* __LINUX_BIO_CRYPT_CTX_H */
diff --git a/include/linux/bio.h b/include/linux/bio.h
index bcb7629..a0ee494 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -8,7 +8,6 @@
#include <linux/highmem.h>
#include <linux/mempool.h>
#include <linux/ioprio.h>
-#include <linux/bio-crypt-ctx.h>
#ifdef CONFIG_BLOCK
/* struct bio, bio_vec and BIO_* flags are defined in blk_types.h */
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index 30a0b32..c4dfe4b 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -6,13 +6,93 @@
#ifndef __LINUX_BLK_CRYPTO_H
#define __LINUX_BLK_CRYPTO_H
-#include <linux/bio.h>
+#include <linux/types.h>
+
+enum blk_crypto_mode_num {
+ BLK_ENCRYPTION_MODE_INVALID,
+ BLK_ENCRYPTION_MODE_AES_256_XTS,
+ BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+ BLK_ENCRYPTION_MODE_ADIANTUM,
+ BLK_ENCRYPTION_MODE_MAX,
+};
+
+#define BLK_CRYPTO_MAX_KEY_SIZE 64
+#define BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE 128
+
+/**
+ * struct blk_crypto_config - an inline encryption key's crypto configuration
+ * @crypto_mode: encryption algorithm this key is for
+ * @data_unit_size: the data unit size for all encryption/decryptions with this
+ * key. This is the size in bytes of each individual plaintext and
+ * ciphertext. This is always a power of 2. It might be e.g. the
+ * filesystem block size or the disk sector size.
+ * @dun_bytes: the maximum number of bytes of DUN used when using this key
+ * @is_hw_wrapped: @raw points to a wrapped key to be used by an inline
+ * encryption hardware that accepts wrapped keys.
+ */
+struct blk_crypto_config {
+ enum blk_crypto_mode_num crypto_mode;
+ unsigned int data_unit_size;
+ unsigned int dun_bytes;
+ bool is_hw_wrapped;
+};
+
+/**
+ * struct blk_crypto_key - an inline encryption key
+ * @crypto_cfg: the crypto configuration (like crypto_mode, key size) for this
+ * key
+ * @data_unit_size_bits: log2 of data_unit_size
+ * @size: size of this key in bytes (determined by @crypto_cfg.crypto_mode)
+ * @raw: the raw bytes of this key. Only the first @size bytes are used.
+ *
+ * A blk_crypto_key is immutable once created, and many bios can reference it at
+ * the same time. It must not be freed until all bios using it have completed
+ * and it has been evicted from all devices on which it may have been used.
+ */
+struct blk_crypto_key {
+ struct blk_crypto_config crypto_cfg;
+ unsigned int data_unit_size_bits;
+ unsigned int size;
+ u8 raw[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
+};
+
+#define BLK_CRYPTO_MAX_IV_SIZE 32
+#define BLK_CRYPTO_DUN_ARRAY_SIZE (BLK_CRYPTO_MAX_IV_SIZE / sizeof(u64))
+
+/**
+ * struct bio_crypt_ctx - an inline encryption context
+ * @bc_key: the key, algorithm, and data unit size to use
+ * @bc_dun: the data unit number (starting IV) to use
+ *
+ * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for
+ * write requests) or decrypted (for read requests) inline by the storage device
+ * or controller, or by the crypto API fallback.
+ */
+struct bio_crypt_ctx {
+ const struct blk_crypto_key *bc_key;
+ u64 bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+};
+
+#include <linux/blk_types.h>
+#include <linux/blkdev.h>
+
+struct request;
+struct request_queue;
#ifdef CONFIG_BLK_INLINE_ENCRYPTION
-int blk_crypto_submit_bio(struct bio **bio_ptr);
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+ return bio->bi_crypt_context;
+}
-bool blk_crypto_endio(struct bio *bio);
+void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
+ const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+ gfp_t gfp_mask);
+
+bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
+ unsigned int bytes,
+ const u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]);
int blk_crypto_init_key(struct blk_crypto_key *blk_key,
const u8 *raw_key, unsigned int raw_key_size,
@@ -21,40 +101,65 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key,
unsigned int dun_bytes,
unsigned int data_unit_size);
-int blk_crypto_start_using_mode(enum blk_crypto_mode_num crypto_mode,
- unsigned int dun_bytes,
- unsigned int data_unit_size,
- bool is_hw_wrapped_key,
- struct request_queue *q);
+int blk_crypto_start_using_key(const struct blk_crypto_key *key,
+ struct request_queue *q);
int blk_crypto_evict_key(struct request_queue *q,
const struct blk_crypto_key *key);
+bool blk_crypto_config_supported(struct request_queue *q,
+ const struct blk_crypto_config *cfg);
+
#else /* CONFIG_BLK_INLINE_ENCRYPTION */
-static inline int blk_crypto_submit_bio(struct bio **bio_ptr)
+static inline bool bio_has_crypt_ctx(struct bio *bio)
{
- return 0;
-}
-
-static inline bool blk_crypto_endio(struct bio *bio)
-{
- return true;
+ return false;
}
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
-#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
-
-int blk_crypto_fallback_init(void);
-
-#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
-
-static inline int blk_crypto_fallback_init(void)
+static inline void bio_clone_skip_dm_default_key(struct bio *dst,
+ const struct bio *src);
+void __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask);
+static inline void bio_crypt_clone(struct bio *dst, struct bio *src,
+ gfp_t gfp_mask)
{
- return 0;
+ bio_clone_skip_dm_default_key(dst, src);
+ if (bio_has_crypt_ctx(src))
+ __bio_crypt_clone(dst, src, gfp_mask);
}
-#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+#if IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
+static inline void bio_set_skip_dm_default_key(struct bio *bio)
+{
+ bio->bi_skip_dm_default_key = true;
+}
+
+static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
+{
+ return bio->bi_skip_dm_default_key;
+}
+
+static inline void bio_clone_skip_dm_default_key(struct bio *dst,
+ const struct bio *src)
+{
+ dst->bi_skip_dm_default_key = src->bi_skip_dm_default_key;
+}
+#else /* CONFIG_DM_DEFAULT_KEY */
+static inline void bio_set_skip_dm_default_key(struct bio *bio)
+{
+}
+
+static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
+{
+ return false;
+}
+
+static inline void bio_clone_skip_dm_default_key(struct bio *dst,
+ const struct bio *src)
+{
+}
+#endif /* !CONFIG_DM_DEFAULT_KEY */
#endif /* __LINUX_BLK_CRYPTO_H */
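
Taken together, the declarations above describe the whole upper-layer flow: build a blk_crypto_config, ask blk_crypto_config_supported() whether the queue (or blk-crypto-fallback) can handle it, call blk_crypto_start_using_key(), attach a per-bio context with bio_crypt_set_ctx(), and later blk_crypto_evict_key() when the key is retired. A hedged sketch of that flow, collapsing the key-setup-time checks and the per-bio step into one function for brevity; the function and variable names are illustrative, not code from this patch:

    /* Sketch only; the real user of this API is fscrypt's inline_crypt.c. */
    #include <linux/bio.h>
    #include <linux/blk-crypto.h>

    static int example_submit_encrypted_bio(struct request_queue *q,
                                            const struct blk_crypto_key *key,
                                            struct bio *bio, u64 first_dun)
    {
            u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_dun };
            int err;

            /* True if the hardware or blk-crypto-fallback handles this config. */
            if (!blk_crypto_config_supported(q, &key->crypto_cfg))
                    return -EOPNOTSUPP;

            err = blk_crypto_start_using_key(key, q);
            if (err)
                    return err;

            /* Attach the key and the starting data unit number, then submit. */
            bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
            submit_bio(bio);
            return 0;
    }

When the key is retired (for example at inode eviction), its owner calls blk_crypto_evict_key(q, key) so that no keyslot keeps referencing it. Bio merging uses the same data: bios may only be merged if their contexts carry the same key and contiguous DUNs, where the DUN array is treated as one little-endian multi-u64 counter that advances by one per data_unit_size bytes; bio_crypt_dun_is_contiguous(), declared above, performs that check.
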
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 695dee7..17738da 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -43,7 +43,7 @@ struct pr_ops;
struct rq_qos;
struct blk_queue_stats;
struct blk_stat_callback;
-struct keyslot_manager;
+struct blk_keyslot_manager;
#define BLKDEV_MIN_RQ 4
#define BLKDEV_MAX_RQ 128 /* Default maximum */
@@ -224,6 +224,11 @@ struct request {
unsigned short nr_integrity_segments;
#endif
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+ struct bio_crypt_ctx *crypt_ctx;
+ struct blk_ksm_keyslot *crypt_keyslot;
+#endif
+
unsigned short write_hint;
unsigned short ioprio;
@@ -477,7 +482,7 @@ struct request_queue {
#ifdef CONFIG_BLK_INLINE_ENCRYPTION
/* Inline crypto capabilities */
- struct keyslot_manager *ksm;
+ struct blk_keyslot_manager *ksm;
#endif
unsigned int rq_timeout;
@@ -1557,6 +1562,12 @@ struct blk_integrity *bdev_get_integrity(struct block_device *bdev)
return blk_get_integrity(bdev->bd_disk);
}
+static inline bool
+blk_integrity_queue_supports_integrity(struct request_queue *q)
+{
+ return q->integrity.profile;
+}
+
static inline bool blk_integrity_rq(struct request *rq)
{
return rq->cmd_flags & REQ_INTEGRITY;
@@ -1637,6 +1648,11 @@ static inline struct blk_integrity *blk_get_integrity(struct gendisk *disk)
{
return NULL;
}
+static inline bool
+blk_integrity_queue_supports_integrity(struct request_queue *q)
+{
+ return false;
+}
static inline int blk_integrity_compare(struct gendisk *a, struct gendisk *b)
{
return 0;
@@ -1688,6 +1704,25 @@ static inline struct bio_vec *rq_integrity_vec(struct request *rq)
#endif /* CONFIG_BLK_DEV_INTEGRITY */
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q);
+
+void blk_ksm_unregister(struct request_queue *q);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline bool blk_ksm_register(struct blk_keyslot_manager *ksm,
+ struct request_queue *q)
+{
+ return true;
+}
+
+static inline void blk_ksm_unregister(struct request_queue *q) { }
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+
struct block_device_operations {
int (*open) (struct block_device *, fmode_t);
void (*release) (struct gendisk *, fmode_t);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index efa30e1..4dc50d7 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1379,6 +1379,7 @@ extern int send_sigurg(struct fown_struct *fown);
#define SB_NODIRATIME 2048 /* Do not update directory access times */
#define SB_SILENT 32768
#define SB_POSIXACL (1<<16) /* VFS does not apply the umask */
+#define SB_INLINECRYPT (1<<17) /* Use blk-crypto for encrypted files */
#define SB_KERNMOUNT (1<<22) /* this is a kern_mount call */
#define SB_I_VERSION (1<<23) /* Update inode I_version field */
#define SB_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index 66d513c..516f564 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -69,7 +69,6 @@ struct fscrypt_operations {
bool (*has_stable_inodes)(struct super_block *sb);
void (*get_ino_and_lblk_bits)(struct super_block *sb,
int *ino_bits_ret, int *lblk_bits_ret);
- bool (*inline_crypt_enabled)(struct super_block *sb);
int (*get_num_devices)(struct super_block *sb);
void (*get_devices)(struct super_block *sb,
struct request_queue **devs);
@@ -558,23 +557,22 @@ static inline void fscrypt_set_ops(struct super_block *sb,
/* inline_crypt.c */
#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
-extern bool fscrypt_inode_uses_inline_crypto(const struct inode *inode);
-extern bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode);
+bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode);
-extern void fscrypt_set_bio_crypt_ctx(struct bio *bio,
- const struct inode *inode,
- u64 first_lblk, gfp_t gfp_mask);
+void fscrypt_set_bio_crypt_ctx(struct bio *bio,
+ const struct inode *inode, u64 first_lblk,
+ gfp_t gfp_mask);
-extern void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
- const struct buffer_head *first_bh,
- gfp_t gfp_mask);
+void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
+ const struct buffer_head *first_bh,
+ gfp_t gfp_mask);
-extern bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
- u64 next_lblk);
+bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+ u64 next_lblk);
-extern bool fscrypt_mergeable_bio_bh(struct bio *bio,
- const struct buffer_head *next_bh);
+bool fscrypt_mergeable_bio_bh(struct bio *bio,
+ const struct buffer_head *next_bh);
bool fscrypt_dio_supported(struct kiocb *iocb, struct iov_iter *iter);
@@ -582,16 +580,12 @@ int fscrypt_limit_dio_pages(const struct inode *inode, loff_t pos,
int nr_pages);
#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
-static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+
+static inline bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode)
{
return false;
}
-static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
-{
- return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
-}
-
static inline void fscrypt_set_bio_crypt_ctx(struct bio *bio,
const struct inode *inode,
u64 first_lblk, gfp_t gfp_mask) { }
@@ -644,6 +638,36 @@ fscrypt_inode_should_skip_dm_default_key(const struct inode *inode)
#endif
/**
+ * fscrypt_inode_uses_inline_crypto() - test whether an inode uses inline
+ * encryption
+ * @inode: an inode. If encrypted, its key must be set up.
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ * encryption should be done in the block layer via blk-crypto rather
+ * than in the filesystem layer.
+ */
+static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+{
+ return fscrypt_needs_contents_encryption(inode) &&
+ __fscrypt_inode_uses_inline_crypto(inode);
+}
+
+/**
+ * fscrypt_inode_uses_fs_layer_crypto() - test whether an inode uses fs-layer
+ * encryption
+ * @inode: an inode. If encrypted, its key must be set up.
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ * encryption should be done in the filesystem layer rather than in the
+ * block layer via blk-crypto.
+ */
+static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
+{
+ return fscrypt_needs_contents_encryption(inode) &&
+ !__fscrypt_inode_uses_inline_crypto(inode);
+}
+
+/**
* fscrypt_require_key() - require an inode's encryption key
* @inode: the inode we need the key for
*
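
The net effect of the fscrypt.h changes is that fscrypt_inode_uses_inline_crypto() and fscrypt_inode_uses_fs_layer_crypto() become always-available inline wrappers (mutually exclusive for any inode that needs contents encryption), while fscrypt_set_bio_crypt_ctx() and fscrypt_mergeable_bio() remain the hooks a filesystem calls on its data bios. A hedged sketch of the expected call pattern in a read path; the function name is illustrative, not code from this patch:

    /* Sketch of the call pattern only. */
    #include <linux/bio.h>
    #include <linux/fscrypt.h>

    static void example_fs_submit_read_bio(struct inode *inode, struct bio *bio,
                                           u64 first_lblk)
    {
            /*
             * Attach an inline-crypto context if the inode uses blk-crypto;
             * ext4's read path above calls this for every data bio it builds.
             */
            fscrypt_set_bio_crypt_ctx(bio, inode, first_lblk, GFP_NOIO);

            submit_bio(bio);

            /*
             * Inodes for which fscrypt_inode_uses_fs_layer_crypto() is true
             * are instead decrypted by the filesystem after the bio completes,
             * as ext4 and f2fs do in their post-read processing.
             */
    }

Before adding more pages to an existing bio, the filesystem asks fscrypt_mergeable_bio(bio, inode, next_lblk) (or the _bh variant) whether the new blocks can share the bio's crypt context; if not, it submits the bio and starts a new one, which is exactly the pattern the f2fs hunks above adopt.
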
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
index f5e0eed..3910fb8 100644
--- a/include/linux/keyslot-manager.h
+++ b/include/linux/keyslot-manager.h
@@ -7,6 +7,7 @@
#define __LINUX_KEYSLOT_MANAGER_H
#include <linux/bio.h>
+#include <linux/blk-crypto.h>
/* Inline crypto feature bits. Must set at least one. */
enum {
@@ -19,10 +20,10 @@ enum {
#ifdef CONFIG_BLK_INLINE_ENCRYPTION
-struct keyslot_manager;
+struct blk_keyslot_manager;
/**
- * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware
+ * struct blk_ksm_ll_ops - functions to manage keyslots in hardware
* @keyslot_program: Program the specified key into the specified slot in the
* inline encryption hardware.
* @keyslot_evict: Evict key from the specified keyslot in the hardware.
@@ -37,66 +38,104 @@ struct keyslot_manager;
* a keyslot manager - this structure holds the function ptrs that the keyslot
* manager will use to manipulate keyslots in the hardware.
*/
-struct keyslot_mgmt_ll_ops {
- int (*keyslot_program)(struct keyslot_manager *ksm,
+struct blk_ksm_ll_ops {
+ int (*keyslot_program)(struct blk_keyslot_manager *ksm,
const struct blk_crypto_key *key,
unsigned int slot);
- int (*keyslot_evict)(struct keyslot_manager *ksm,
+ int (*keyslot_evict)(struct blk_keyslot_manager *ksm,
const struct blk_crypto_key *key,
unsigned int slot);
- int (*derive_raw_secret)(struct keyslot_manager *ksm,
+ int (*derive_raw_secret)(struct blk_keyslot_manager *ksm,
const u8 *wrapped_key,
unsigned int wrapped_key_size,
u8 *secret, unsigned int secret_size);
};
-struct keyslot_manager *keyslot_manager_create(
- struct device *dev,
- unsigned int num_slots,
- const struct keyslot_mgmt_ll_ops *ksm_ops,
- unsigned int features,
- const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
- void *ll_priv_data);
+struct blk_keyslot_manager {
+ /*
+ * The struct blk_ksm_ll_ops that this keyslot manager will use
+ * to perform operations like programming and evicting keys on the
+ * device
+ */
+ struct blk_ksm_ll_ops ksm_ll_ops;
-void keyslot_manager_set_max_dun_bytes(struct keyslot_manager *ksm,
- unsigned int max_dun_bytes);
+ /*
+ * The maximum number of bytes supported for specifying the data unit
+ * number.
+ */
+ unsigned int max_dun_bytes_supported;
-int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key);
+ /*
+ * The supported features as a bitmask of BLK_CRYPTO_FEATURE_* flags.
+ * Most drivers should set BLK_CRYPTO_FEATURE_STANDARD_KEYS here.
+ */
+ unsigned int features;
-void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot);
+ /*
+ * Array of size BLK_ENCRYPTION_MODE_MAX of bitmasks that represents
+ * whether a crypto mode and data unit size are supported. The i'th
+	 * bit of crypto_modes_supported[crypto_mode] is set iff a data unit
+ * size of (1 << i) is supported. We only support data unit sizes
+ * that are powers of 2.
+ */
+ unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
-void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot);
+ /* Device for runtime power management (NULL if none) */
+ struct device *dev;
-bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
- enum blk_crypto_mode_num crypto_mode,
- unsigned int dun_bytes,
- unsigned int data_unit_size,
- bool is_hw_wrapped_key);
+ /* Here onwards are *private* fields for internal keyslot manager use */
-int keyslot_manager_evict_key(struct keyslot_manager *ksm,
- const struct blk_crypto_key *key);
+ unsigned int num_slots;
-void keyslot_manager_reprogram_all_keys(struct keyslot_manager *ksm);
+ /* Protects programming and evicting keys from the device */
+ struct rw_semaphore lock;
-void *keyslot_manager_private(struct keyslot_manager *ksm);
+ /* List of idle slots, with least recently used slot at front */
+ wait_queue_head_t idle_slots_wait_queue;
+ struct list_head idle_slots;
+ spinlock_t idle_slots_lock;
-void keyslot_manager_destroy(struct keyslot_manager *ksm);
+ /*
+ * Hash table which maps struct *blk_crypto_key to keyslots, so that we
+ * can find a key's keyslot in O(1) time rather than O(num_slots).
+ * Protected by 'lock'.
+ */
+ struct hlist_head *slot_hashtable;
+ unsigned int log_slot_ht_size;
-struct keyslot_manager *keyslot_manager_create_passthrough(
- struct device *dev,
- const struct keyslot_mgmt_ll_ops *ksm_ops,
- unsigned int features,
- const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
- void *ll_priv_data);
+ /* Per-keyslot data */
+ struct blk_ksm_keyslot *slots;
+};
-void keyslot_manager_intersect_modes(struct keyslot_manager *parent,
- const struct keyslot_manager *child);
+int blk_ksm_init(struct blk_keyslot_manager *ksm, unsigned int num_slots);
-int keyslot_manager_derive_raw_secret(struct keyslot_manager *ksm,
- const u8 *wrapped_key,
- unsigned int wrapped_key_size,
- u8 *secret, unsigned int secret_size);
+blk_status_t blk_ksm_get_slot_for_key(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key,
+ struct blk_ksm_keyslot **slot_ptr);
+
+unsigned int blk_ksm_get_slot_idx(struct blk_ksm_keyslot *slot);
+
+void blk_ksm_put_slot(struct blk_ksm_keyslot *slot);
+
+bool blk_ksm_crypto_cfg_supported(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_config *cfg);
+
+int blk_ksm_evict_key(struct blk_keyslot_manager *ksm,
+ const struct blk_crypto_key *key);
+
+void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm);
+
+void blk_ksm_destroy(struct blk_keyslot_manager *ksm);
+
+void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
+ const struct blk_keyslot_manager *child);
+
+int blk_ksm_derive_raw_secret(struct blk_keyslot_manager *ksm,
+ const u8 *wrapped_key,
+ unsigned int wrapped_key_size,
+ u8 *secret, unsigned int secret_size);
+
+void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm);
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
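
On the driver side, struct blk_keyslot_manager is now embedded in the host controller structure (as the UFS changes in this series do with ufs_hba) and the driver fills in the ops and capability fields itself before handing the ksm to the request queue with blk_ksm_register(). A hedged sketch of that setup, where struct my_host, my_program_key() and my_evict_key() are illustrative stand-ins for driver-specific code:

    /* Illustrative driver-side setup; real drivers differ in detail. */
    #include <linux/blkdev.h>
    #include <linux/keyslot-manager.h>

    struct my_host {
            struct blk_keyslot_manager ksm;         /* embedded, not allocated */
            unsigned int num_keyslots;
    };

    static int my_program_key(struct blk_keyslot_manager *ksm,
                              const struct blk_crypto_key *key, unsigned int slot)
    {
            /* Write key->raw (key->size bytes) into hardware keyslot @slot. */
            return 0;
    }

    static int my_evict_key(struct blk_keyslot_manager *ksm,
                            const struct blk_crypto_key *key, unsigned int slot)
    {
            /* Clear hardware keyslot @slot. */
            return 0;
    }

    static const struct blk_ksm_ll_ops my_ksm_ops = {
            .keyslot_program = my_program_key,
            .keyslot_evict   = my_evict_key,
    };

    static int my_host_init_crypto(struct my_host *host, struct request_queue *q)
    {
            struct blk_keyslot_manager *ksm = &host->ksm;
            int err;

            err = blk_ksm_init(ksm, host->num_keyslots);
            if (err)
                    return err;

            ksm->ksm_ll_ops = my_ksm_ops;
            ksm->features = BLK_CRYPTO_FEATURE_STANDARD_KEYS;
            ksm->max_dun_bytes_supported = 8;       /* e.g. UFS: 8-byte DUNs */
            /* Bit i set means a (1 << i)-byte data unit size is supported. */
            ksm->crypto_modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 4096;

            /* Fails (returns false) if, e.g., blk-integrity is in use on q. */
            if (!blk_ksm_register(ksm, q))
                    return -EOPNOTSUPP;
            return 0;
    }

On teardown or probe failure the driver typically calls blk_ksm_destroy(ksm), and blk_ksm_unregister(q) if the queue had been registered.
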