Configuring your own AWS S3 bucket to work with BEE Plugin

Custom S3 Bucket is an application configuration feature that lets you connect your own Amazon Web Services (AWS) S3 bucket to your BEE Plugin application.

With this feature, you can store and manage your customers' assets without building a new File System Provider: you only need to provide a compliant folder structure and fill out a simple form.

This feature is available to all subscription plans, including Free.

 

How are images stored?

Our default file system provider uses two first-level folders to manage assets:

  • Images folder - Defines where the user's images will be stored.
  • Thumbnails folder - Used by our API to store the thumbnails of the uploaded images.

These folders can be root folders or part of a more complex directory structure; a setup sketch follows the notes below.

A few notes and recommendations:

  • Neither folder should be nested inside the other.
  • Their names must comply with standard AWS naming restrictions.
  • For performance reasons, you should use a dedicated bucket and place these folders in the root.
  • The S3 bucket must be publicly accessible.
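For illustration only, here is a minimal sketch of this setup using Python and boto3. The bucket and folder names (my-bee-assets, images/, thumbnails/) are hypothetical placeholders; you can create the same structure from the AWS console or any other tool.

# Sketch only: create the two first-level folders and make the bucket readable.
# All names below are example values, not requirements.
import json
import boto3

BUCKET = "my-bee-assets"           # hypothetical dedicated bucket
IMAGES_PREFIX = "images/"          # folder for user images
THUMBNAILS_PREFIX = "thumbnails/"  # folder for generated thumbnails

s3 = boto3.client("s3")

# S3 has no real directories: an empty object whose key ends in "/" acts as a
# folder placeholder, so the paths exist before you fill out the form.
for prefix in (IMAGES_PREFIX, THUMBNAILS_PREFIX):
    s3.put_object(Bucket=BUCKET, Key=prefix)

# Public read access for uploaded assets (depending on your account settings,
# you may also need to relax the bucket's "Block Public Access" options).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))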

 

Shared assets
As an additional configuration option, you can provide shared files to your users, something that we do in the free version of the BEE editor at beefree.io.

These images are shown to all your customers as read-only assets.

The most common use case is providing sample images for the user's first experience with the editor. Other use cases include providing application-specific images or documents that must not be deleted by the user.

To use this option you need to set up two additional folders:

  • Shared images folder - This is the folder that your users will browse through the file manager.
  • Shared thumbnails folder - While thumbnails for user images are created automatically when the images are uploaded, there is no automatic thumbnail creation for shared images. You must provide your own thumbnails using these settings (a thumbnail-generation sketch follows this list):
    • 200px as max. width/height (this guarantees a correct preview in the file manager)
    • Name: original_image_name.ext_thumb.png (so the thumbnail for cat.jpg must be cat.jpg_thumb.png)
    • Format: use only PNG as the image format
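As an illustration of the rules above, here is one way to generate compliant thumbnails locally with Python and the Pillow library. The folder names shared-images and shared-thumbnails are hypothetical; any tool that produces 200px PNG thumbnails following the naming convention above works just as well.

# Sketch only: build a "<original_name>_thumb.png" thumbnail (max 200px per side)
# for every shared image. Folder names are placeholders.
import os
from PIL import Image  # Pillow

SHARED_IMAGES_DIR = "shared-images"      # local copies of your shared images
SHARED_THUMBS_DIR = "shared-thumbnails"  # thumbnails to upload to the shared thumbnails folder

os.makedirs(SHARED_THUMBS_DIR, exist_ok=True)

for name in os.listdir(SHARED_IMAGES_DIR):
    with Image.open(os.path.join(SHARED_IMAGES_DIR, name)) as img:
        img.thumbnail((200, 200))  # keeps aspect ratio, max 200px per side
        thumb_name = f"{name}_thumb.png"  # e.g. cat.jpg -> cat.jpg_thumb.png
        img.convert("RGBA").save(os.path.join(SHARED_THUMBS_DIR, thumb_name), "PNG")

The generated files can then be uploaded to the shared thumbnails folder in your bucket, next to the shared images themselves.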

 

Filling out the form to connect your AWS S3 bucket

Once you have set up a compliant folder structure, you can use the form in the developer portal to connect your application. It's one of the available server-side configurations for your BEE Plugin application (Application details > Open configuration > Storage options).

This is a description of the form fields and what information you will need to provide in each of them:

  • Bucket name Required field
    The name you assigned to the bucket when you created it.

  • Access key & Access secret key Required fields
    You can provide AWS root account credentials or IAM user credentials (we recommend the latter for security reasons). The provided account must have read and write access to the given bucket; see the access-check sketch after this list. More about AWS credentials.

  • Select region Required field
    The AWS region where you created the bucket. EU is the default setting.

  • Images path Required field
    The relative path (from the bucket root) to the images folder described above.
    Must use the "/" symbol as the path delimiter.

  • Thumbnail path Required field
    The relative path (from the bucket root) to the thumbnails folder described above.
    Must use the "/" symbol as the path delimiter.

  • Shared images path Optional field
    The relative path (from the bucket root) to the shared images folder described above. Cannot be the bucket root.
    Must use the "/" symbol as the path delimiter.

  • Shared thumbnails path Optional field
    The relative path (from the bucket root) to the shared thumbnails folder described above. Cannot be the bucket root.
    Must use the "/" symbol as the path delimiter.

 

Testing your settings

The test button becomes active once all required fields have been filled out correctly. It allows you to test your settings before saving the updated configuration; we recommend doing so before saving any changes.

Remember to save your changes with the SAVE button at the top.

 

Moving from the default S3 bucket

If your BEE Plugin application currently uses the default S3 bucket, you wish to switch to your own bucket, and you have files to transfer between the two, please contact us.


Comments

  • Tom

    Does this work with a non-AWS S3 provider? Or would that require creating a File System Provider?

  • Guille Padilla

    Hi Tom, this works only with AWS.
    If you need to use different storage or set up a different behavior for directory management, you should consider using a Custom File System Provider as described in:
    http://help.beefree.io/hc/en-us/articles/204845881-Connecting-BEE-with-your-image-file-storage

  • Michael Fritz

    Hi, the functionality to connect to AWS buckets is great. How can I use different buckets for different clients, though? I only see an option to connect to a single bucket, which would not make sense for different users/customers if they all stored their pictures in the same folder. Thanks

  • Guille Padilla

    Hi Michael, the connector works with a single bucket. Folders are not shared between users if you correctly identify them.
    Check this article about the UID, which is the param used to recognize users: http://help.beefree.io/hc/en-us/articles/208185235-How-does-the-UID-parameter-work-

    If you need a different behaviour, you can build and connect an FSP API as described here: http://help.beefree.io/hc/en-us/articles/204845881-Connecting-BEE-with-your-image-file-storage
    You can modify the FSP to accomplish your needs.

  • Ryan

    Hi, I am having problems with using my own S3 bucket. It tries to load images over https which throws a privacy error - is there any way to force it to load over http?

  • Sergio M.

    Hi Ryan, just a quick follow-up: were you able to connect to your S3 bucket?