Uploading files to AWS S3 Bucket using Spring Boot

Account Configuration

To start using an S3 bucket you need to create an account on the Amazon website. The registration procedure is straightforward, but you will have to verify your phone number and enter your credit card details (don’t worry, your card will not be charged unless you use paid services).

After creating the account, we need to create an S3 bucket. Go to Services -> S3, or enter ‘S3’ in the search field.

Then press the ‘Create bucket’ button.

Enter your bucket name (it must be globally unique) and choose the region closest to you. Press the ‘Create’ button.

NOTE: Amazon’s free tier gives you 5 GB of storage, 20,000 GET requests, and 2,000 PUT requests for the first year. After you exceed these limits, you will have to pay for usage.

Now your bucket is created, but we need to give users permission to access it. It is not secure to hand out the access keys of your root user to your development team or anyone else. Instead, we will create a new IAM user and give them permission to use only the S3 bucket.

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources.

Let’s create such a user. Go to Services -> IAM. In the navigation panel, choose Users and then press the ‘Create user’ button.

Enter the user’s name and press the ‘Next’ button.

Then we need to set the permissions for this user.
Select ‘Attach policies directly’. In the search field enter ‘s3full’ and choose AmazonS3FullAccess.

Then press ‘Next’ and ‘Create user’. If you did everything right, you should see the new user in your list of users.

The next step is to create an access key for this user. Open the user’s details by clicking on the user name and click ‘Create access key’ link.

Among the list of access options please choose the one that fits your needs. In this example we will choose ‘Application running outside AWS’.

Then press ‘Next’, add an optional tag if needed, and press ‘Create access key’.

On the next screen you will see your access key and secret access key. Save those values and download the .csv file, because you will not be able to view the secret key later.


Our S3 Bucket configuration is done so let’s proceed to the Spring Boot application.

Spring Boot Part

Let’s create a Spring Boot project and add the AWS SDK dependency:

<dependency>
   <groupId>com.amazonaws</groupId>
   <artifactId>aws-java-sdk</artifactId>
   <version>1.12.581</version>
</dependency>
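If you only need S3, you can pull in just the S3 module instead of the full SDK bundle; the per-service artifact below keeps the dependency tree much smaller:

```xml
<dependency>
   <groupId>com.amazonaws</groupId>
   <artifactId>aws-java-sdk-s3</artifactId>
   <version>1.12.581</version>
</dependency>
```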

Now let’s add the S3 bucket properties to our application.yml file:

amazonProperties:
  accessKey: XXXXXXXXXXXXXXXXX
  secretKey: XXXXXXXXXXXXXXXXXXXXXXXXXX
  bucketName: your-bucket-name
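Keeping real keys in application.yml is risky if the file ends up in version control. As an alternative, Spring can resolve these values from environment variables (a sketch, assuming AWS_ACCESS_KEY, AWS_SECRET_KEY, and S3_BUCKET_NAME are set in the environment):

```yaml
amazonProperties:
  accessKey: ${AWS_ACCESS_KEY}
  secretKey: ${AWS_SECRET_KEY}
  bucketName: ${S3_BUCKET_NAME}
```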

It’s time to create our RestController with four endpoints:

“/files/upload” – to upload file
“/files/{fileName}/base64” – get file as base64 string by filename
“/files/{fileName}/download” – download file by filename
“/files/{fileName:.+}” – delete file by filename

@RestController
public class FileController {


    private FileManagerService fileManager;


    @Autowired
    FileController(FileManagerService fileManager) {
        this.fileManager = fileManager;
    }


    @PostMapping("/files/upload")
    public ResponseEntity<SavedFileDTO> uploadFile(@RequestBody FileDTO fileDTO) {
        return ResponseEntity.ok(fileManager.uploadFile(fileDTO));
    }


    @GetMapping("/files/{fileName}/base64")
    public ResponseEntity<String> getFileInBase64(@PathVariable("fileName") String fileName) {
        return ResponseEntity.ok(fileManager.getFileInBase64(fileName));
    }


    @GetMapping("/files/{fileName}/download")
    public ResponseEntity<Resource> downloadFile(@PathVariable("fileName") String fileName) {
        byte[] content = fileManager.getFileAsBytes(fileName);
        return ResponseEntity.ok()
                .header(HttpHeaders.CONTENT_TYPE, getFileMediaType(fileName))
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + fileName + "\"")
                .header(HttpHeaders.CONTENT_LENGTH, String.valueOf(content.length))
                .body(new ByteArrayResource(content));
    }


    @DeleteMapping("/files/{fileName:.+}")
    public ResponseEntity<Void> deleteFile(@PathVariable("fileName") String fileName) {
        fileManager.deleteFile(fileName);
        return ResponseEntity.noContent().build();
    }


    private String getFileMediaType(String fileName) {
        String mediaType;
        String fileExtension = fileName.substring(fileName.lastIndexOf('.') + 1);
        switch (fileExtension.toLowerCase()) {
            case "pdf":
                mediaType = MediaType.APPLICATION_PDF_VALUE;
                break;
            case "png":
                mediaType = MediaType.IMAGE_PNG_VALUE;
                break;
            case "jpg":
            case "jpeg":
                mediaType = MediaType.IMAGE_JPEG_VALUE;
                break;
            default:
                mediaType = MediaType.TEXT_PLAIN_VALUE;
        }
        return mediaType;
    }
}

The upload method accepts a FileDTO as the request body. Here is how this class looks: just two fields, fileName and base64, because we will send files to this endpoint as base64 strings.

public class FileDTO {


    private String fileName;
    private String base64;


    public String getFileName() {
        return fileName;
    }


    public void setFileName(String fileName) {
        this.fileName = fileName;
    }


    public String getBase64() {
        return base64;
    }


    public void setBase64(String base64) {
        this.base64 = base64;
    }
}
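For illustration, a request body for the upload endpoint could look like this (the base64 value here is just the string ‘hello’ encoded; a real payload would be much longer):

```json
{
    "fileName": "hello.txt",
    "base64": "aGVsbG8="
}
```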

This code won’t compile yet because we don’t have the AmazonClient and FileManagerService classes, so let’s create them and add all the methods we need.

AmazonClient will have the following fields and methods:

@Component
public class AmazonClient {


    private final Logger logger = LoggerFactory.getLogger(AmazonClient.class);


    private AmazonS3 s3client;


    @Value("${amazonProperties.bucketName}")
    private String bucketName;
    @Value("${amazonProperties.accessKey}")
    private String accessKey;
    @Value("${amazonProperties.secretKey}")
    private String secretKey;


    @PostConstruct
    private void initializeAmazonClient() {
        AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
        this.s3client = AmazonS3ClientBuilder.standard().withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion(Regions.US_EAST_1).build();
        createBucket();
    }


    public void uploadFileToBucket(String fileName, File file, String folderToUpload) {
        logger.info("Uploading file {} to {}", fileName, folderToUpload);
        s3client.putObject(new PutObjectRequest(bucketName, folderToUpload + "/" + fileName, file));
    }


    public void deleteFileFromBucket(String filename, String folderName) {
        logger.info("Deleting file {} from {}", filename, folderName);
        DeleteObjectRequest delObjReq = new DeleteObjectRequest(bucketName, folderName + "/" + filename);
        s3client.deleteObject(delObjReq);
    }


    public void deleteMultipleFilesFromBucket(List<String> files) {
        DeleteObjectsRequest delObjReq = new DeleteObjectsRequest(bucketName)
                .withKeys(files.toArray(new String[0]));
        logger.info("Deleting files...");
        s3client.deleteObjects(delObjReq);
    }


    public File getFileFromBucket(String filename, String folderName) {
        InputStream inputStream = getFileInputStream(filename, folderName);
        File file = new File(filename);
        try {
            FileUtils.copyInputStreamToFile(inputStream, file);
        } catch (IOException e) {
            logger.error(ExceptionUtils.getStackTrace(e));
        }
        return file;
    }


    public InputStream getFileInputStream(String filename, String folderName) {
        S3Object s3object = s3client.getObject(bucketName, folderName + "/" + filename);
        return s3object.getObjectContent();
    }


    private void createBucket() {
        if (s3client.doesBucketExistV2(bucketName)) {
            logger.info("Bucket {} already exists", bucketName);
            return;
        }
        try {
            logger.info("Creating bucket {}", bucketName);
            s3client.createBucket(bucketName);
        } catch (Exception e) {
            logger.error((ExceptionUtils.getStackTrace(e)));
        }
    }
}

AmazonS3 is a class from the AWS SDK dependency. All other fields are just representations of the variables from our application.yml file. The @Value annotation binds application properties directly to class fields during application initialization.

We added the @PostConstruct method initializeAmazonClient() to set the Amazon credentials on the client. The @PostConstruct annotation is needed to run this method after the constructor has been called, because class fields marked with @Value are still null inside the constructor. The createBucket() method is then called to create the S3 bucket if it doesn’t exist yet. Note that the region is hard-coded to US_EAST_1 in this example; use the region you chose when creating your bucket.
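The initialization order can be illustrated without Spring. In the minimal sketch below (ClientLike and init() are hypothetical stand-ins, not Spring APIs), the field is injected reflectively after the constructor runs, which is essentially what the container does before invoking a @PostConstruct method:

```java
import java.lang.reflect.Field;

// Minimal illustration (no Spring) of why @Value fields cannot be used in the
// constructor: the container first runs the constructor, then injects the
// fields, and only then calls the @PostConstruct method.
class ClientLike {
    private String bucketName;            // would be @Value-injected by Spring
    String seenInConstructor;
    String seenInPostConstruct;

    ClientLike() {
        seenInConstructor = bucketName;   // bucketName is still null here
    }

    void init() {                         // stands in for @PostConstruct
        seenInPostConstruct = bucketName; // populated by now
    }
}

public class InjectionOrderDemo {
    public static void main(String[] args) throws Exception {
        ClientLike client = new ClientLike();           // step 1: constructor

        // step 2: inject the field reflectively, as the container would
        Field field = ClientLike.class.getDeclaredField("bucketName");
        field.setAccessible(true);
        field.set(client, "my-bucket");

        client.init();                                  // step 3: lifecycle callback

        System.out.println(client.seenInConstructor);   // null
        System.out.println(client.seenInPostConstruct); // my-bucket
    }
}
```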

All the other methods handle uploading, deleting, and getting files from the S3 bucket.

Now let’s see what fields and methods are available in our FileManagerService class.

private static final String UPLOAD_FOLDER_NAME = "public-files";
private final AmazonClient amazonClient;

These fields are just a folder name where our files will be stored and our AmazonClient that we created earlier.

public SavedFileDTO uploadFile(FileDTO fileDTO) {
    SavedFileDTO savedFile = new SavedFileDTO();
    savedFile.setGeneratedFileName(generateFileName(fileDTO));
    savedFile.setOriginalFileName(fileDTO.getFileName());
    File file = convertBase64ToFile(fileDTO.getBase64(), fileDTO.getFileName());
    this.amazonClient.uploadFileToBucket(savedFile.getGeneratedFileName(), file, UPLOAD_FOLDER_NAME);
    savedFile.setUploadedAt(new Date());
    try {
        FileUtils.forceDelete(file);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
    return savedFile;
}

The method above generates a unique filename, converts the base64 string to a File, uploads it to the bucket, and then deletes the temporary local file. It calls two other methods for generating the name and converting the content:

private String generateFileName(FileDTO fileDTO) {
        String name = fileDTO.getFileName().replaceAll("[^a-zA-Z0-9.-]", "_");
        return (new Date().getTime() + "_" + name);
}

private File convertBase64ToFile(String base64Content, String filename) {
        byte[] decodedContent = Base64.getDecoder().decode(base64Content.getBytes(StandardCharsets.UTF_8));
        return bytesToFile(decodedContent, filename);
}

private File bytesToFile(byte[] content, String fileName) {
        File file = new File(fileName);
        try (FileOutputStream fos = new FileOutputStream(file)) {
            fos.write(content);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return file;
}
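These helpers can be sanity-checked outside of Spring. Below is a JDK-only sketch of the same round trip (base64ToFile is a hypothetical mirror of convertBase64ToFile plus bytesToFile, not the article’s exact code):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class Base64RoundTrip {
    // Decodes a base64 string and writes the bytes to the given file,
    // mirroring convertBase64ToFile + bytesToFile from the article.
    static Path base64ToFile(String base64Content, Path target) throws Exception {
        byte[] decoded = Base64.getDecoder().decode(base64Content);
        Files.write(target, decoded);
        return target;
    }

    public static void main(String[] args) throws Exception {
        String original = "hello s3";
        String base64 = Base64.getEncoder()
                .encodeToString(original.getBytes(StandardCharsets.UTF_8));

        Path file = base64ToFile(base64, Files.createTempFile("upload", ".txt"));

        // Reading the file back yields the original content
        System.out.println(Files.readString(file)); // hello s3
    }
}
```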

The next two methods show how to get a file as a base64 string or as bytes:

public String getFileInBase64(String fileName) {
        File file = amazonClient.getFileFromBucket(fileName, UPLOAD_FOLDER_NAME);
        try {
            return Base64.getEncoder().encodeToString(FileUtils.readFileToByteArray(file));
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
}

public byte[] getFileAsBytes(String fileName) {
        InputStream inputStream = amazonClient.getFileInputStream(fileName, UPLOAD_FOLDER_NAME);
        try {
            return IOUtils.toByteArray(inputStream);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return new byte[0];
}

And the last method simply deletes a file:

public void deleteFile(String fileName) {
    amazonClient.deleteFileFromBucket(fileName, UPLOAD_FOLDER_NAME);
}

NOTE: Every time we upload, get, or delete files from the S3 bucket, we need to specify the folder name as well.

Testing time

Let’s test our application by making requests with Postman. Choose the POST method and, in the request body, add JSON with two fields: fileName and base64. You can convert any file to base64 using an online converter.
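If you prefer not to use an online converter, you can produce the base64 string locally with plain JDK classes; a minimal sketch (encodeFile and the example file path are just illustrations):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class ToBase64 {
    // Reads a file and returns its contents as a base64 string
    static String encodeFile(Path path) throws IOException {
        return Base64.getEncoder().encodeToString(Files.readAllBytes(path));
    }

    public static void main(String[] args) throws Exception {
        // Pass the file to encode as the first argument, e.g. testfile.jpg
        Path path = Path.of(args.length > 0 ? args[0] : "testfile.jpg");
        if (Files.exists(path)) {
            System.out.println(encodeFile(path));
        }
    }
}
```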

The endpoint url is: http://localhost:8080/files/upload.

If you did everything correctly, you should receive a similar response body:

{
    "originalFileName": "testfile.jpg",
    "generatedFileName": "1699358552994_testfile.jpg",
    "uploadedAt": "2023-11-07T12:02:34.644+00:00"
}

And if you open your S3 bucket on Amazon then you should see one uploaded image there.

Now let’s test our delete method. Choose DELETE method with endpoint url: http://localhost:8080/files/1699358552994_testfile.jpg.

If the file is deleted successfully, you should receive HTTP status 204 No Content.

Let’s upload another file and test getting it as a base64 string using a GET request to the URL

http://localhost:8080/files/1699358552994_testfile.jpg/base64

You should receive a base64 string as the response.

Conclusion

That’s basically it. Now you can easily use S3 buckets in your own projects. I hope this was helpful. If you have any questions, feel free to leave a comment. Thank you for reading.

You can check a full example of this application on Oril Software GitHub.

#Java

#SpringBoot
