Claude 4.0 Did in 3 Hours What Would’ve Taken Me Months
As previously stated, UI/UX is not my forte. So today we'll use the power of Gen AI to apply a facelift to an existing React Native project.
In the previous post we looked at the different AI models available in GitHub Copilot's Agent mode: https://dev.to/dmitryame/design-smarter-testing-top-llms-for-mobile-interface-optimization-k89
Based on that comparison, the best model for our purposes is Claude 4.0.
It took two iterations -- about 3 hours combined, 22 commits total. In the first iteration I hit the rate limit, so I had no choice but to punt it to the next day. Still, the amount of work I accomplished even before hitting the limit is staggering: what I completed in a few hours would usually take me weeks or even months.
Unlike in the previous post, where I gave the different models a high-level prompt asking for general UI improvements, this time I went one component at a time and was very specific. I won't share the details of my interactions with the Agent (it would be overwhelming) -- only the end results: two screenshots per component (before and after) with some brief notes, which should be self-explanatory.
PhotosList component (landing screen)
improvements:
- top nav bar
- thumbnails
- rounder corners
- redesigned video indicator
- footer redesign
- more consistent, modern-looking buttons
- top nav bar animation (text labels hide on scroll)
Drawer navigation (hamburger menu)
Starred Photos (empty list)
Search Photos (empty list)
improvements:
- input text box
- action button
Thumbnails with comments on Starred and Search Photos
Photo Details screen
improvements:
- dark theme
- top nav bar
- a more modern card view for components:
  - comments
  - AI-recognized tags, text, moderation
- label-based design for individual tags
- better colors
- footer redesign
Add comments screen
Zoom View Screen
Overall Tablet improvements
photos list
detailed photo view
Works on Android too (love React Native)
Extra bonus
I noticed that, while the Agent was making improvements, one of the commits added support for haptic feedback -- I'll take that as a free bonus. Unfortunately I can't test it until the changes roll out to prod, but hopefully it just works.
Using the right LLM for the right purpose
After making all these cool updates, releasing the mobile app to the App Store is still a task of its own. Especially after all these UI changes, the App Store screenshots need to be updated. I don't even have the right tools for slicing and dicing images, but I do have ChatGPT -- let's give it a try.
prompt: generate iTunes store screen shot image, use the first image as the most up to date screenshot and the second image as a guidelines for design, the dimensions should be 1242 × 2688px
prompt: take the mobile phone content area only from the first image and apply it to the second image in the correct place of the phone screen
ChatGPT:
prompt: you lost the headline from the second photo, apply it to the results
ChatGPT:
Well, not exactly what I was hoping for -- kind of a funny result. Oh well, I'm sure some day soon it will be able to do this correctly.
Let's try the same task with the Claude 4.0 model in Copilot Agent mode.
prompt:
generate iTunes store screen shot image, the output dimensions should be 1242 × 2688px, apply the screenshot which is #iphone1_1.png to the phone screen image on the #iphone1.png maintaining the proper expected screen size, position, rotation, preserve the headline on the top of the iphone1.png, final image output file name iphone1_2.png
Claude: Sorry, your request failed. Please try again. Request id: 4bd0b681-6758-4471-9c91-9c17a0231be9
Reason: Please check your firewall rules and network connection then try again. Error Code: net::ERR_HTTP2_PROTOCOL_ERROR.
And it fails today too... it probably ran out of its daily limit, so I'll have to try again tomorrow. Stay tuned for more updates.
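In the meantime, since this is really deterministic image compositing rather than generation, a short script can do the job reliably. Here's a minimal sketch using Pillow (my tool choice, not something from the original workflow); the `SCREEN_BOX` coordinates are placeholders you'd measure from your own frame template:

```python
# Composite a raw app screenshot into a marketing frame for the App Store.
# Requires Pillow (pip install Pillow). SCREEN_BOX is a placeholder --
# measure the phone screen's rectangle inside your own frame image.
from PIL import Image

STORE_SIZE = (1242, 2688)            # the exact dimensions the store expects
SCREEN_BOX = (120, 400, 1122, 2450)  # (left, top, right, bottom) of the phone
                                     # screen area inside the frame image

def make_store_screenshot(frame_path, screenshot_path, out_path):
    """Scale the app screenshot to fill the frame's screen area,
    keeping the headline and artwork around it intact."""
    frame = Image.open(frame_path).convert("RGBA")
    shot = Image.open(screenshot_path).convert("RGBA")

    # Stretch the screenshot to exactly cover the phone's screen rectangle.
    w = SCREEN_BOX[2] - SCREEN_BOX[0]
    h = SCREEN_BOX[3] - SCREEN_BOX[1]
    shot = shot.resize((w, h))

    frame.paste(shot, SCREEN_BOX[:2])  # drop it onto the screen area
    frame = frame.resize(STORE_SIZE)   # enforce the store's exact size
    frame.convert("RGB").save(out_path)  # flatten alpha before upload

# usage: make_store_screenshot("iphone1.png", "iphone1_1.png", "iphone1_2.png")
```

No rate limits, and the headline stays exactly where it was.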