LucidLink is a log-structured distributed file system for object storage. We write to the object store in a unique data layout that allows random read access and data streaming. Our data layout is not the traditional one-file-to-one-object mapping: we lay objects out against a block size, so a single file can result in multiple objects, depending on the file size and the block size (default 256 KiB).
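To illustrate the layout with made-up numbers (a sketch of the arithmetic only, not LucidLink code): a 10 MiB file stored at the default 256 KiB block size maps to 40 objects.

```shell
# Sketch: how many objects a file occupies for a given block size.
# Values are illustrative; LucidLink performs this mapping internally.
FILE_KIB=10240   # a 10 MiB file
BLOCK_KIB=256    # default block size
OBJECTS=$(( (FILE_KIB + BLOCK_KIB - 1) / BLOCK_KIB ))  # ceiling division
echo "$OBJECTS"  # → 40
```

A smaller block size means more, smaller objects per file; a larger block size means fewer, larger ones.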


A file space leverages metadata coordination and synchronization to request individual blocks from the object store. This dramatically improves performance, because we never retrieve data before it is required. As a result, when data is read we don't always need to read the whole file and all of its objects; we can stream only the data required, when the application or user requires it.
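As an illustration of block-level access (again, illustrative arithmetic only, with hypothetical numbers): a read spanning byte offsets 300 KiB to 600 KiB of a file stored with 256 KiB blocks only touches blocks 1 and 2, so only those objects need to be fetched.

```shell
# Sketch: which block indices a byte-range read touches (illustrative only).
BLOCK_KIB=256
READ_START_KIB=300
READ_END_KIB=600
FIRST_BLOCK=$(( READ_START_KIB / BLOCK_KIB ))  # → 1
LAST_BLOCK=$(( READ_END_KIB / BLOCK_KIB ))     # → 2
echo "blocks $FIRST_BLOCK..$LAST_BLOCK"        # only 2 of the file's objects
```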


Equally, when it comes to writing, we create new objects for fresh data and garbage collect expired objects. Objects within the object store are encrypted and compressed client-side, and therefore cannot be accessed from the object store directly.


If the default 256 KiB block size doesn't suit your data type, you can initialize a file space with a preferred block size. The block size is set at initialization and cannot be changed afterwards. To initialize your file space, follow the steps below, and don't hesitate to reach out to LucidLink Support for assistance.


Step 1:

  1. Create a file space

  2. Define file space name

  3. Choose Your storage or LucidLink storage

  4. Select Your Cloud Provider and Region 

  5. Specify an HTTP or HTTPS endpoint URL (including port if required)

  6. Optional: specify Region if required

  7. Optional: specify Bucket name if required

  8. Review and confirm, then Create file space

Step 2:


At this point the file space is being set up. This will take a minute or two, so please be patient. Once setup is complete and the portal tile is waiting on Initialize Next, you will proceed to initialize your file space with your object storage credentials from the command line, using the --block-size option.


Ensure that the LucidLink OS client is downloaded and installed on the machine performing the initialization. 


Once the OS client has been installed, please ensure it is open and prompting you to "connect". Leave this window open, because in the background the LucidLink OS client daemon/service is running.


Alternatively, if you are running an OS client without a GUI (Linux), you can launch the LucidLink OS client daemon from your command line with "lucid daemon". Please ensure lucid daemon remains running in the background.
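One way to keep the daemon running in the background is sketched below. The use of pgrep, nohup, and the log path are our own assumptions, not LucidLink requirements; it assumes "lucid" is on your PATH.

```shell
# Sketch: start the daemon in the background if it isn't already running.
# Assumes 'lucid' is on PATH; adjust the log path to taste.
if pgrep -x lucid >/dev/null 2>&1; then
    STATUS="already running"
elif command -v lucid >/dev/null 2>&1; then
    nohup lucid daemon >/tmp/lucid-daemon.log 2>&1 &
    STATUS="started"
else
    STATUS="lucid not found on PATH"
fi
echo "$STATUS"
```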


Open a command line or terminal, depending on your OS, and enter the following command, making sure to supply the full file space and domain, shared secret, endpoint:port, credentials, and the required block size in KiB, followed by a provider string as a simple one-word vendor identity, e.g. AWS.


lucid init-s3 --fs <filespace.domain> --password <sharedsecret> --endpoint <ipaddress/url:port> --access-key <access-key> --secret-key <secret-key> --https --region <region> --block-size <KiB> --provider <vendor>
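A filled-in invocation might look like the following. Every value here (file space name, endpoint, keys, region, the 1024 KiB block size) is a hypothetical placeholder; the command is echoed rather than executed so it can be reviewed before running it for real.

```shell
# Hypothetical placeholder values: substitute your own before running.
FS="media.example"
SECRET="my-shared-secret"
ENDPOINT="s3.us-east-1.amazonaws.com:443"
ACCESS_KEY="AKIAEXAMPLE"
SECRET_KEY="secret-key-example"
REGION="us-east-1"
BLOCK_KIB=1024
PROVIDER="AWS"

CMD="lucid init-s3 --fs $FS --password $SECRET --endpoint $ENDPOINT \
--access-key $ACCESS_KEY --secret-key $SECRET_KEY --https \
--region $REGION --block-size $BLOCK_KIB --provider $PROVIDER"

# Dry run: review the assembled command, then execute it yourself.
echo "$CMD"
```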


Please consult "lucid help init-s3" for additional initialization options, such as --bucket-name. Should you want to see the full list of initialization parameters or need help troubleshooting, please reach out to LucidLink Support.


A successful initialization returns "Daemon init request sent." Should your initial attempt not complete successfully, the error output may provide additional guidance; update your combination of parameters and try again.


You can try multiple times, for example with --http instead of --https, and note that in certain circumstances parameters such as region are case sensitive. Until your initialization succeeds, your file space will wait patiently in the portal for this step to complete.


Note: If you receive "Connection refused", please ensure the OS client is open, or launch the daemon from a command line/terminal with "lucid daemon". Also note that the endpoint URL does not require the http:// or https:// prefix as it did in the portal, because the scheme is supplied through the --https or --http command line option.